Artificial Intelligence Poses ‘Profound Risks to Society And Humanity’
March 30, 2023—As computer scientists race to advance artificial intelligence technologies, tech and academic leaders are calling on AI laboratories to pause experiments on all systems more powerful than GPT-4. They also called on governments to step in and impose a moratorium if the labs do not pause on their own.
The Future of Life Institute, a nonprofit organization, published the letter on Wednesday.
“AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1] and acknowledged by top AI labs,” the letter says. “As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.”
Big-Name Signatories
The letter's 1,377 signatories include Elon Musk, CEO of SpaceX, Tesla, and Twitter; Steve Wozniak, co-founder of Apple; Stuart Russell, professor of computer science at Berkeley and director of the Center for Intelligent Systems; Andrew Yang, 2020 presidential candidate; Marc Rotenberg, president of the Center for AI and Digital Policy; and Emilia Javorsky, director of the Future of Life Institute.
The letter remains open to public signatories as well.
Safety Protocols Needed
The signers say governments and technologists should develop a shared set of safety protocols. In calling for a pause, they point out that society has halted other technologies with "potentially catastrophic effects."
Specifically, the signers say experiments on technologies more powerful than GPT-4 should stop. GPT-4 is an advanced AI system that uses language and reasoning to communicate. It is more capable than the ChatGPT chatbot in that it "can draft lawsuits, pass standardized exams and build a working website from a hand-drawn sketch," CNN reported in March.
AI Replacing Humans
One concern the signers raise is that AI could replace people in jobs. Another is misinformation in the age of big data. The letter puts it this way:
“Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”