OpenAI, the company behind the popular AI chatbot ChatGPT, has warned that it might leave the European market if the EU imposes strict regulation on its large language models. The EU is finalizing a new law that would classify such systems as high-risk and require them to meet certain safety and transparency standards.
OpenAI CEO Sam Altman said he had “many concerns” about the EU AI Act, which is currently being revised by lawmakers. He said that both ChatGPT and its successor GPT-4 could be designated as high-risk under the proposed law, which would force the company to disclose details of its training methods and data sources.
“If we can comply, we will, and if we can’t, we’ll cease operating… We will try. But there are technical limits to what’s possible,” Altman said at a talk in London on Tuesday, according to Time.
Altman’s comments contrast with his previous calls for the U.S. government to regulate AI in a way that supports innovation and protects human rights. He told Congress in March that he was “all for AI regulation” as long as it was “smart regulation.”
The EU AI Act, which was first proposed in 2021, aims to create a legal framework for the development and use of AI in Europe. It would ban some AI applications that pose an unacceptable risk to fundamental rights, such as social scoring systems and manipulative social engineering AI. It would also impose strict requirements on high-risk AI systems that affect areas such as health, education, law enforcement and finance.
The law was recently expanded to include new provisions for “foundation models,” which are large-scale AI systems that power services like ChatGPT and DALL-E. These models are trained on massive amounts of data scraped from the web, some of which may be copyrighted or contain personal information. The law would require creators of foundation models to provide information about their design, data sources, quality and accuracy.
OpenAI has been reluctant to share such information, arguing that disclosure could expose it to lawsuits and hand an advantage to competitors. The company has also faced criticism over the ethical and social implications of its AI products, which have been accused of generating harmful or misleading content.
Altman said he hoped the EU would reconsider its approach and adopt a more supportive stance toward AI innovation. He said he believed that AI could bring tremendous benefits to humanity if used responsibly and ethically.
“I think Europe should be a leader in this,” he said. “I think Europe should be very pro-AI.”