More than a thousand technology leaders and researchers, including Elon Musk, have called for a temporary halt to the development of the most sophisticated artificial intelligence systems. In an open letter, they caution that such AI tools pose "profound risks to society and humanity." The letter, released on Wednesday by the nonprofit Future of Life Institute, warns that AI developers are locked in a relentless and unmanageable race to create and deploy increasingly potent digital minds that not even their creators can fully comprehend, anticipate, or dependably control.
Other signatories of the letter include Apple co-founder Steve Wozniak, entrepreneur and 2020 presidential candidate Andrew Yang, and Rachel Bronson, president of the Bulletin of the Atomic Scientists, which oversees the Doomsday Clock.
The Double-Edged Sword of AI Advancements
Gary Marcus, an entrepreneur and academic who has long criticized flaws in AI systems, stated in an interview, "These things are shaping our world. We have a perfect storm of corporate irresponsibility, widespread adoption, lack of regulation, and a huge number of unknowns." AI currently powers chatbots like ChatGPT, Microsoft's Bing, and Google's Bard, which can engage in human-like conversations, craft essays on a limitless range of subjects, and execute more intricate tasks like writing computer code.
The quest for more powerful chatbots has ignited a competitive race, likely determining the tech industry's future leaders. However, these tools have been criticized for inaccuracies and their capacity to disseminate misinformation.
The open letter specifically requests a pause in developing AI systems more potent than GPT-4, the chatbot unveiled this month by OpenAI, co-founded by Musk. This temporary halt would provide an opportunity to implement "shared safety protocols" for AI systems, according to the letter.
"If such a pause cannot be enacted quickly, governments should step in and institute a moratorium," it added. The letter emphasized that the development of powerful AI systems should proceed "only once we are confident that their effects will be positive and their risks will be manageable." It also suggested that "Humanity can enjoy a flourishing future with AI," adding that, having created potent AI systems, we can now savor an "AI summer" in which to enjoy the benefits, tailor these systems for the unambiguous advantage of everyone, and give society time to adapt.
Sam Altman, the CEO of OpenAI, did not sign the letter. Marcus and others believe that convincing the broader tech community to agree on a moratorium would be an uphill battle. Moreover, swift government action seems improbable, as lawmakers have demonstrated little initiative in regulating artificial intelligence.
The Challenges and Potential Dangers Ahead
While some signatories are known for repeatedly warning that AI could annihilate humanity, others, including Marcus, are more worried about its immediate dangers.
These include the propagation of disinformation and the risk that people will rely on these systems for medical and emotional guidance. "The letter shows how many people are deeply worried about what is going on," said Marcus, who also signed the letter.
He believes the letter will be a turning point, stating, "I think it is a really important moment in the history of AI — and maybe humanity." However, he acknowledged that persuading the wider community of companies and researchers to implement a moratorium could be challenging. "The letter is not perfect," he admitted, "but the spirit is exactly right."