Open Letter Signed By Elon Musk Asks for 6-Month AI Pause
The Future of Life Institute (FLI) has published an open letter urging all AI labs to pause experiments “more powerful than GPT-4” for at least six months, asking AI leaders to focus instead on developing safety protocols. The letter has drawn wide support from high-profile individuals: Elon Musk (CEO of SpaceX, Tesla, and Twitter), Steve Wozniak (Apple co-founder), and Tristan Harris (Executive Director of the Center for Humane Technology) are among the biggest names on the list.

The FLI is a nonprofit organization whose mission is “steering transformative technologies away from extreme, large-scale risks and towards benefiting life.” In the open letter, the group expresses concern over the negative effects powerful AI technologies can have on society and humanity. In part, the letter reads, “Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”

The letter also criticizes the lack of planning and management in developing and deploying human-competitive intelligence. Specifically, it takes aim at AI labs racing to “develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.” Although the letter doesn’t explicitly mention ChatGPT’s creator OpenAI in this context, it calls for limiting “the rate of growth of compute used for creating new models” and asks for those limits to be put in place now. If companies don’t pause giant AI experiments soon, the letter urges governments to step in and institute a moratorium.

FLI’s site showed close to 4,500 signatories at the time of writing, but the nonprofit added a note that it has collected over 50,000 signatures so far.
The signature collection process is still active, and anyone can sign the petition through a form on the page.

Leading AI researchers have long expressed concern over the rapid pace of development in the field. Eliezer Yudkowsky, who has worked on Artificial General Intelligence since 2001 and is best known for popularizing the idea of friendly artificial intelligence, predicts that “literally everyone on Earth will die” unless AI development is shut down soon. So far, no one from OpenAI or Anthropic, a company founded by former OpenAI researchers, has signed the letter.