In a landmark move, leading AI companies, including Microsoft, Google, Amazon, and OpenAI, along with 10 nations and the European Union, have agreed to a collective “kill switch” policy: a commitment to halt development of their most advanced AI models if those models cross certain risk thresholds. The agreement, reached at a summit in Seoul, marks a crucial step toward ensuring AI remains a boon rather than a threat to humanity.
The idea of an AI “kill switch” has long been discussed as a safeguard against a doomsday outcome akin to the “Terminator” scenario, in which AI surpasses human intelligence and acts against human interests. That concept is now moving from science fiction to tangible policy. As the policy paper signed by the companies puts it, “In the extreme, organizations commit not to develop or deploy a model or system at all, if mitigations cannot be applied to keep risks below the thresholds.”
The summit in Seoul, a follow-up to the Bletchley Park AI Safety Summit, was not just a venue for lip service but a call to action. While critics of last year’s summit labeled it “worthy but toothless” for its lack of enforceable commitments, the latest gathering produced a more concrete pledge. Even so, the agreement’s effectiveness remains in question: it carries no legal weight, does not clearly define the risk thresholds, and does not bind AI companies that were absent from the summit.
AI’s potential to revolutionize our economy and address grand challenges is unquestionable. Yet, as U.K. Technology Secretary Michelle Donelan emphasized, this potential can only be realized if we tackle the accompanying risks. OpenAI CEO Sam Altman concurs, indicating the advent of artificial general intelligence (AGI) is near and acknowledging its potential for misuse and accidents. According to an OpenAI blog post, “AGI would also come with serious risk of misuse, drastic accidents, and societal disruption.”
Despite efforts to establish global regulatory frameworks, most remain scattered and lack legislative authority. A UN policy framework addressing AI risks to human rights, for instance, was approved but is nonbinding. Similarly, the Bletchley Declaration contained no real regulatory commitments.
In response to the regulatory vacuum, AI companies have started forming industry bodies such as the Frontier Model Forum, which seeks to advance the safety of frontier AI models. The forum, though it has yet to offer firm policy proposals, recently added Amazon and Meta as members.
Governments are also making strides. President Biden’s executive order on AI safety was commended for its proactive stance, mandating AI companies to share safety test results with the government. Additionally, the EU and China have implemented formal policies on issues such as copyright and data privacy.
States are not lagging behind, with Colorado Governor Jared Polis announcing legislation to ban algorithmic discrimination in AI and ensure developer compliance with state regulations through data sharing requirements.
Relevant articles:
– “Tech companies have agreed to an AI ‘kill switch’ to prevent Terminator-style risks,” Fortune, 05/27/2024
– “AI companies make fresh safety promise at Seoul summit, nations agree to align work on risks,” ABC News, 05/24/2024
– “In Seoul summit, heads of states and companies commit to AI safety,” Yahoo News UK, 05/24/2024