Artificial intelligence (AI) is one of the most powerful and transformative technologies of our time. It has the potential to solve some of the world’s most pressing problems, such as climate change, poverty, and disease. But it also poses a serious threat to humanity if it is not aligned with our values and goals.
That’s the message of a new statement co-signed by dozens of AI experts and leaders, including Google DeepMind CEO Demis Hassabis, OpenAI CEO Sam Altman, and Turing Award winners Geoffrey Hinton and Yoshua Bengio. The statement, published by the Center for AI Safety, a San Francisco-based non-profit organization that advocates for AI safety research, reads:
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
The statement is meant to raise awareness of, and convey urgency about, the potential dangers of AI as systems become more powerful and autonomous. The signatories warn that AI could pose an existential threat to humanity if it is not designed and deployed with care.
The statement echoes previous warnings by other prominent figures in the AI field, such as Elon Musk, Stephen Hawking, and Nick Bostrom. They have argued that AI could surpass human intelligence and capabilities, and that we may not be able to control or understand its decisions and actions. They have also called for more research and regulation to ensure that AI is beneficial and ethical for humanity.
However, the debate over AI risk is contentious. Some experts have dismissed the idea of extinction from AI as unrealistic or sensationalist, arguing that we should focus on more immediate and tangible challenges, such as algorithmic bias, misinformation, privacy, and security. They have also pointed out that AI has many positive applications for society, in areas such as education, health care, entertainment, and innovation.
The statement’s backers acknowledge that extinction risk is not the only or the most urgent problem facing humanity, but argue that it should not be ignored or underestimated. They also stress that addressing it does not preclude society from tackling other types of AI risk, such as algorithmic bias or misinformation.
The statement urges governments, industry, academia, and civil society to work together to ensure that AI is developed and used in a safe and responsible manner. It also calls for more funding and support for AI safety research, which aims to create AI systems that are aligned with human values and goals.
The statement concludes by saying:
“We hope this statement will encourage more people to join us in thinking seriously about how we can ensure that AI serves humanity rather than harms it.”