The European Union has taken a pioneering step by approving the first comprehensive legislation to regulate artificial intelligence, a move poised to influence global standards in the tech industry. Nearly three years after the draft rules were introduced, the European Parliament approved the AI Act by a decisive margin: 523 votes in favor, 46 against, and 49 abstentions. This legislative milestone is set to solidify the EU’s position at the forefront of AI governance and establish a framework that could resonate worldwide.
The AI Act operates on a risk-based framework, categorizing AI applications into four distinct risk categories: unacceptable, high, limited, and low. Under the new regulations, certain uses of AI are banned outright due to their potential to infringe on citizens’ rights. Among these prohibited applications are biometric categorization systems based on sensitive characteristics, untargeted scraping of facial images, and social scoring mechanisms. Additionally, law enforcement’s use of real-time remote biometric identification (RBI) systems is generally forbidden, with narrowly defined exceptions that require strict safeguards.
Dragos Tudorache, a key figure in the negotiations, noted that the legislation sets a new governance model around technology, linking “artificial intelligence to the fundamental values that form the basis of our societies.” The act is touted not only as a protective measure for fundamental rights, democracy, and the rule of law but also as an innovation catalyst aiming to promote transparent, safe, and accountable AI systems.
High-risk AI systems will be subject to stringent obligations, including risk assessments, usage logs, and transparency measures to ensure human oversight. The law also mandates that AI-generated images, videos, and audio – so-called deepfakes – be labeled as artificially manipulated, addressing the growing concern over disinformation.
For generative AI models deemed to pose systemic risks, like OpenAI’s GPT-4 or Google’s Gemini, additional requirements are specified. These include performing model evaluations, assessing and mitigating systemic risks, and reporting on incidents. Furthermore, to nurture innovation, the EU will establish regulatory sandboxes and real-world testing environments, particularly accessible to SMEs and startups.
The AI Act’s reach extends beyond the EU’s borders, affecting any AI system operating within the bloc. This means US-based AI providers, like OpenAI, will have to comply with the EU standards within Europe. To enforce compliance, each member country will establish its own AI watchdog and collaborate with a central AI Office set up by the European Commission.
Penalties for non-compliance are severe, with fines reaching up to €35 million or up to seven percent of a company’s global annual turnover, whichever is higher. This stringent approach mirrors the EU’s broader regulatory strategy for the tech industry, which often sets a benchmark for global regulation.
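The “whichever is higher” fine ceiling can be made concrete with a small worked example. The sketch below is illustrative only (the function name and figures are assumptions drawn from the article, not from the regulation’s full schedule of tiered penalties):

```python
# Illustrative sketch of the AI Act's top fine ceiling as described above:
# the higher of a fixed EUR 35 million or 7% of global annual turnover.

FIXED_CAP_EUR = 35_000_000
TURNOVER_RATE = 0.07

def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Return the maximum possible fine under the 'whichever is higher' rule."""
    return max(FIXED_CAP_EUR, TURNOVER_RATE * global_annual_turnover_eur)

# For a company with EUR 1 billion in turnover, 7% (EUR 70 million)
# exceeds the fixed EUR 35 million floor:
print(max_fine_eur(1_000_000_000))  # 70000000.0

# For a smaller company with EUR 100 million in turnover, 7% would be
# only EUR 7 million, so the EUR 35 million figure applies instead:
print(max_fine_eur(100_000_000))  # 35000000.0
```

In other words, the fixed amount acts as a floor for the ceiling: large multinationals face the percentage-based cap, while the flat figure governs smaller firms.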
The regulation is currently undergoing a final lawyer-linguist review and is expected to be formally adopted before the end of the legislative session via the corrigendum procedure. The law must also receive formal endorsement from the Council.
Relevant articles:
– EU regulators pass the planet’s first sweeping AI regulations
– Artificial Intelligence Act: MEPs adopt landmark law, European Parliament, Wed, 13 Mar 2024 12:14:11 GMT
– The E.U. Has Passed the World’s First Comprehensive AI Law, TIME, Wed, 13 Mar 2024 14:50:09 GMT
– Haiti’s political transition faces challenges; EU approves AI regulation legislation, NPR, Thu, 14 Mar 2024 11:59:09 GMT