The European Union (EU)'s Artificial Intelligence Act1 ("EU AI Act" or "Act") is a flagship initiative in the artificial intelligence (AI) sector and is currently the world's only comprehensive regulatory framework addressing AI. The Act has been compared to the EU's trailblazing data protection law, the General Data Protection Regulation (GDPR), and its influence is expected to shape similar laws across the globe. The EU AI Act entered into force across all 27 EU Member States on August 1, 2024, following its publication in the Official Journal of the European Union on July 12, 2024, and most of its provisions will apply from August 2, 2026.
Background
The protection of fundamental human rights, as enshrined in the EU Charter of Fundamental Rights, is at the core of the Act, which "seeks to ensure the safety and trustworthiness of high-risk AI systems developed and used in the EU without hindering the development and financing of this burgeoning sector."
Drafted using a "future-proof approach"2 so that its rules can adapt to the fast-evolving reality of the AI sector, the Act is the result of extensive negotiation, culminating in its approval by the Council of the European Union on May 21, 2024. Despite criticism from experts who argue that the Act fails to effectively address the concerns raised by some AI systems, its stringent penalties for non-compliance are intended as a formidable deterrent.
Scope of the Act
Because the EU AI Act is designed to have extraterritorial reach, organizations both inside and outside the EU are justifiably concerned about the scope of this regulation and its potential impact on their operations.
Specifically, the Act applies to any company placing an AI system or general-purpose AI (GPAI) model on the EU market, including "providers" and "deployers" (as those terms are defined in Article 3 of the Act) that are based entirely outside the EU if the output of their AI system is used in the EU (e.g., distributors placing AI systems on the EU market and product manufacturers placing products using AI systems on the EU market under their own brand)3.
Key Elements of the Act
- Definition of AI System
The Act defines an AI system (AIS) as a "machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs, such as predictions, content, recommendations, or decisions that can influence physical or virtual environments"4. This broad definition builds some flexibility into the law, enabling it to respond to technological advancements and the new situations arising from them.
- Risk Classifications
Although the Act does not set out an explicit risk classification structure, it refers to AI systems in terms of the level of risk they pose (i.e., unacceptable risk, high risk, limited risk, and minimal risk). This framework is intended to support legislators in addressing future challenges without stifling innovation.
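For orientation only, the sketch below summarizes how the four risk tiers discussed in the remainder of this section map to the broad obligation levels the Act attaches to them. The one-line descriptions are this article's shorthand, not language from the Act, and do not constitute a legal test.

```python
# Illustrative shorthand only: a rough mapping of the four commonly cited
# risk tiers to the obligation levels discussed in this article.
RISK_TIERS = {
    "unacceptable": "prohibited: no marketing, commissioning, or use in the EU",
    "high": "allowed only with mandatory requirements and ex-ante conformity assessment",
    "limited": "transparency obligations (users must know they face an AI system)",
    "minimal": "no obligations under the Act",
}

for tier, obligations in RISK_TIERS.items():
    print(f"{tier:>12}: {obligations}")
```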
- Unacceptable Risk
In keeping with the EU's basic approach to risk, any risk that is contrary to EU values is considered unacceptable. Accordingly, if an AI system is flagged as posing an unacceptable level of risk, all marketing, commissioning, or use of that system is prohibited. For example, "practices that pose a significant risk of manipulating individuals through subliminal techniques acting on their unconscious" or those that exploit the vulnerabilities of specific groups, such as children or people with disabilities, would be prohibited under this framework.
- High-Risk AI Systems
Under the Act, AI systems that create a high risk to the health and safety or fundamental rights of natural persons are prohibited unless they comply with certain mandatory requirements and undergo an ex-ante conformity assessment. Classification as a "high-risk AI system" depends on both the function performed by the system and its specific purpose5; examples include6:
- Critical infrastructures (e.g., transport) that could put the life and health of citizens at risk;
- Educational or vocational training that may determine access to education and the professional course of someone's life (e.g., scoring of exams);
- Safety components of products (e.g., AI application in robot-assisted surgery);
- Administration of justice and democratic processes (e.g., applying the law to a concrete set of facts); and
- Remote biometric identification systems, all of which are considered high-risk and subject to strict requirements.
High-risk AI systems will only be allowed on the EU market if the following obligations are fulfilled:
- Adequate risk assessment and mitigation systems;
- High quality of the datasets feeding the system to minimize risks and discriminatory outcomes;
- Logging of activity to ensure traceability of results;
- Detailed documentation providing all necessary information on the system and its purpose for authorities to assess its compliance;
- Clear and adequate information to the user;
- Appropriate human oversight measures to minimize risk; and
- High level of robustness, security, and accuracy.
- Limited and Minimal Risk AI Systems
AI systems that are considered limited risk will be subject to transparency obligations under the Act7, meaning that such systems must be designed and developed so that users are informed they are interacting with an AI system, unless this is evident from the circumstances and context of use (e.g., as with chatbots on a website). Finally, AI systems posing minimal risk, which constitute the majority of AI systems, are not subject to any obligations under the Act.
- Penalties and Enforcement
Under the Act8, a deterrent sanction system imposes penalties for noncompliance of up to €35,000,000 or up to 7% of total worldwide annual turnover for the preceding financial year, whichever is higher. SMEs and startups will face proportionate administrative fines.
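As a purely illustrative sketch of the arithmetic (not legal advice), the headline ceiling operates as a simple maximum of the two figures. The example below assumes the figures for the most serious violations and a hypothetical turnover; actual fines are set by regulators and vary by violation type.

```python
# Illustrative arithmetic only: the headline ceiling for the most serious
# violations is the higher of EUR 35,000,000 or 7% of total worldwide
# annual turnover for the preceding financial year.
FLAT_CAP_EUR = 35_000_000
TURNOVER_SHARE = 0.07

def headline_max_fine(worldwide_annual_turnover_eur: float) -> float:
    """Return the theoretical ceiling (not an actual fine) for a given turnover."""
    return max(FLAT_CAP_EUR, TURNOVER_SHARE * worldwide_annual_turnover_eur)

# Hypothetical example: EUR 1 billion in turnover -> 7% (EUR 70,000,000)
# exceeds the flat cap, so the ceiling is EUR 70,000,000.
print(f"EUR {headline_max_fine(1_000_000_000):,.0f}")
```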
Likewise, to ensure strong AI governance, the Act establishes a European Artificial Intelligence Board, with which the competent national authorities of the different Member States will cooperate. This creates multi-level governance, as each EU Member State will also designate its own national authority.
The AI Act also provides for an AI Office within the European Commission to enforce common rules across the EU, a scientific panel of independent experts, and an advisory forum to provide technical expertise to the Board and the Commission.
Next Steps
AI is a rapidly evolving sector of technology helping to drive positive change in areas such as climate change, environmental issues, and health-related challenges. However, like all technology, AI systems also come with risks requiring regulation for the protection of businesses and individuals alike. Clients operating in this space may wish to take the following steps to prepare for compliance under the Act:
- Map use of AI systems, including use by any of the company's service providers;
- Create risk assessments in accordance with the EU AI Act risk definitions;
- Review contracts with AI system vendors to ensure liability is apportioned appropriately; and
- Review their insurance policies to ensure they cover the use of AI.
Footnotes
1. Regulation (EU) 2024/1689 (Artificial Intelligence Act)
2. EU AI Act, explanatory note, point 3.5, p. 11
3. EU AI Act, Article 2
4. EU AI Act, Article 3(1)
5. EU AI Act, Article 6
6. EU AI Act, Annex III
7. EU AI Act, Chapter IV
8. EU AI Act, Article 99
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.