On Friday, 8 December 2023, European Union lawmakers reached a provisional agreement on the much-anticipated Artificial Intelligence Act (AI Act). The AI Act – a broad new piece of legislation governing the development, placing on the market and use of AI systems – will apply beyond the EU's borders and is the world's first comprehensive law on AI.

The text of the provisional agreement on the AI Act will be available soon. In the meantime, the press releases of the European Parliament, the European Council and the European Commission confirm that several relevant changes to the European Commission's original 2021 AI Act proposal were agreed in recent weeks after intense negotiations (e.g., new requirements to conduct a fundamental rights impact assessment for certain AI systems, a revised definition of AI and more stringent rules on providers of high-impact foundation models).

The AI Act consolidates the EU's risk-based approach for regulating AI. The higher the risk that an AI system poses to health, safety or fundamental rights, the stricter the rules. The AI Act establishes three categories of AI systems:

1. Unacceptable risk:  AI systems considered to be a clear threat to the fundamental rights of people will be banned. The ban will cover the following AI systems:

  • Biometric categorization systems that use sensitive characteristics (e.g., political, religious and philosophical beliefs, sexual orientation, and race).
  • Untargeted scraping of facial images from the internet or closed-circuit television footage to create facial recognition databases.
  • Emotion recognition in the workplace and educational institutions.
  • Social scoring based on social behavior or personal characteristics.
  • AI systems that manipulate human behavior to circumvent people's free will.
  • AI used to exploit the vulnerabilities of people (due to age, disability, or social or economic situation). 

2. High risk:  AI systems will be classified as high risk when they pose significant potential harm to health, safety, fundamental rights, the environment, democracy or the rule of law. Examples of high-risk AI systems include certain critical infrastructure (for instance, in the fields of water, gas and electricity), medical devices, and systems for recruiting people. Certain systems used in the fields of law enforcement, border control, and the administration of justice and democratic processes also will be classified as high risk. High-risk AI systems must undergo mandatory fundamental rights impact assessments and will be required to comply with strict requirements – including risk-mitigation systems, high-quality data sets, logging of activity, detailed documentation, clear user information and human oversight. Before they can be put on the market, these systems will be subject to strict obligations, such as pre-deployment conformity assessments, record-keeping obligations and/or mandatory impact assessments that national authorities can scrutinize to assess compliance with the AI Act.

3. Minimal risk:  Most AI systems are expected to fall into the minimal-risk category. Minimal-risk applications – such as AI-enabled recommender systems or spam filters – will benefit from a free pass and be exempt from several obligations, as these systems present only minimal or no risk to citizens' rights or safety.

Risk level | Definition | Examples | Requirements
Unacceptable | AI systems considered a clear threat to the fundamental rights of people | Biometric categorization systems, untargeted scraping of facial images, emotion recognition, social scoring, AI systems that manipulate human behavior, and AI used to exploit vulnerabilities | Banned
High | AI systems with significant potential harm to health, safety, fundamental rights, the environment, democracy and the rule of law | Critical infrastructure, medical devices, recruitment systems, and systems used in law enforcement, border control, and the administration of justice and democratic processes | Mandatory fundamental rights impact assessments; risk-mitigation systems; high-quality data sets; logging of activity; detailed documentation; clear user information; human oversight; pre-deployment conformity assessments; record-keeping obligations
Minimal | AI systems posing only minimal or no risk to citizens' rights or safety (most AI systems) | AI-enabled recommender systems and spam filters | Free pass; exempt from several obligations

Moreover, the AI Act introduces transparency requirements for the use of AI systems – for example, users must be made aware when they are interacting with a chatbot. Deep fakes and other AI-generated content will have to be labelled as such, and users must be informed when biometric categorization or emotion recognition systems are being used.

The AI Act also introduces guardrails for general-purpose AI. Under these rules, general-purpose AI models will require transparency along the value chain. For models that could pose systemic risks, there will be additional binding obligations related to managing risks and monitoring serious incidents, as well as performing model evaluation and adversarial testing.

As far as enforcement of the EU AI rules is concerned, noncompliance with the AI Act can lead to fines ranging from 7.5 million euros or 1.5% of global turnover to 35 million euros or 7% of global turnover, depending on the infringement. The provisional agreement also provides more proportionate caps on administrative fines for small and medium-sized enterprises (SMEs) and startups.

The national competent authorities will supervise implementation of the AI Act at the national level. To ensure harmonized implementation, the European Commission will set up a European AI Office to coordinate at the European level. The new AI Office also will supervise the implementation and enforcement of the new rules on general-purpose AI models. Together with the national market surveillance authorities, the AI Office will be the first body globally to enforce binding rules on AI.

Next steps 

The final text of the AI Act will be subject to a final vote by the European Parliament and the European Council in early 2024. Once the AI Act is adopted, there will be a transitional period before all of its obligations apply, likely in early 2026.

This is just a snapshot of the European AI legislative landscape. Once the provisional agreement is published, Cooley's cyber/data/privacy team will follow up with a more detailed post, explaining what organizations placing AI systems on the market or putting AI systems into service in the EU need to know. 

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.