Global leaders call for moratorium on AI research

March 29 – Over 1,000 technology leaders and researchers, including Steve Wozniak, are calling for a moratorium on the development of powerful artificial intelligence systems, warning in an open letter that AI presents “profound risks to society and humanity.”

AI developers are locked in an “…out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict or reliably control,” says the letter, which was released by the nonprofit Future of Life Institute.

1,000 leaders and researchers

In addition to Steve Wozniak, co-founder of Apple, notable signatories include Andrew Yang, entrepreneur and candidate in the 2020 U.S. presidential election; Rachel Bronson, president of the Bulletin of the Atomic Scientists; Tristan Harris, of the Center for Humane Technology; and Yoshua Bengio, often referred to as one of the “godfathers of AI.”

Elon Musk, the world’s richest person, has said that he believes AI is one of the “biggest risks” to civilization. Although he was previously a stakeholder in OpenAI, the company behind ChatGPT, Musk no longer holds a stake in the firm.

Requested moratorium

The letter requests that technology leaders and AI labs cease training models more powerful than GPT-4, the latest version of the large language model software created by the startup OpenAI.

“Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth?” the letter read.

“Should we automate away all the jobs, including the fulfilling ones? Should we develop non-human minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? … Such decisions must not be delegated to unelected tech leaders.”

AI Ethics

Compared with the weighty issues raised in the letter, concerns like phishing, cyber crime, misinformation and plagiarism may pale, but they too represent significant risks and potential abuses of tools like ChatGPT and Bard.

Further information

Gary Marcus, an NYU professor who signed the letter, said: “The letter isn’t perfect, but the spirit is right: we need to slow down until we better understand the ramifications.”

The U.K. has unveiled proposals for an “adaptable” regulatory framework around AI, according to Reuters.

Critics of the letter

The letter’s critics accuse signatories of promoting “AI hype,” and argue that the claims around the technology’s potential are overblown. Some contend that greater transparency, regulation and dialogue are needed, rather than a general pause on GPT development and other forms of AI.

Get the full scoop from The New York Times.