What You Need To Know:
- Extraterritorial Scope: The European Union Artificial Intelligence Act1 (the Act) applies to AI Systems placed on the EU market or used in the EU by or on behalf of companies located throughout the world. In the U.S., even prior to its enactment, the Act influenced President Joe Biden's Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence (October 30, 2023)2 and state AI laws, and is now often cited for AI "best practices."
- Risk-Based Approach: The Act organizes AI Systems into categories based on the assessed risk to natural persons, i.e., Unacceptable Risk, High Risk, Limited Risk, and Minimal Risk, and imposes regulations intended to mitigate, or even eliminate, the risks applicable to each category.
- Development and Distribution Channels: The Act not only strictly regulates companies that develop and market AI Systems but also imposes continuing obligations on the entities that deploy and/or use the AI Systems.
- Enforcement and Penalties: The EU has a robust enforcement structure for the Act, with substantial penalties that are often calculated as a percentage of the entity's global annual turnover.
On August 1, 2024, the European Union Artificial Intelligence Act (the Act) entered into force. It is the world's first comprehensive legal framework for regulating artificial intelligence (AI) and aims to ensure that AI products and services (AI Systems) developed, marketed, or used in the European Union (EU) reflect European values and protect the fundamental rights of natural persons, such as privacy, data protection, and nondiscrimination. By providing legal certainty on regulatory matters, the European Commission also sought to promote investment and innovation in the EU's fledgling AI industry. Most of the Act's provisions will be phased in over a period of two years.
Who Are the Regulated Entities?
The Act applies to all parties involved in the development, use, import, distribution, or manufacturing of AI Systems, specifically Providers, Deployers, Importers, and Distributors. An AI System used by a natural person in the course of a purely personal, nonprofessional activity is not regulated by the Act.
- Providers3 are entities that develop AI Systems and place them on the EU market under their own name or trademark, or whose branded AI Systems produce output that is used in the EU. The Act reserves its most stringent requirements for Providers.
- Deployers4 or Users are typically commercial entities that use AI Systems. They are subject to the Act if (i) they are located or established in the EU or (ii) the output of the AI System is used in the EU.
- Importers5 are entities that (i) are located or established in the EU and (ii) offer AI Systems in the EU under the brand of a non-EU entity. They are subject to the Act by virtue of being located or established in the EU.
- Distributors6 are entities in the supply chain, other than Providers or Importers, that make AI Systems available on the EU market. A Distributor is subject to the Act if it (i) is located or established in the EU or (ii) places AI Systems on the EU market.
AI Systems Ranked According to Risk
Within the broad category of AI,7 the Act identifies four risk levels (illustrated in the brief sketch following this list):
- Unacceptable Risk: Effective February 2, 2025, AI Systems that present an unacceptable risk to the fundamental rights of natural persons are banned in the EU. The Act contains a lengthy list of prohibited AI Systems,8 including AI Systems that:
- Use subliminal, manipulative, or deceptive techniques to materially distort the behavior of individuals or groups of people and impair their informed decision-making;
- Exploit vulnerabilities due to age, disability, or social or economic situations;
- Create or expand facial recognition databases through untargeted scraping from the internet or CCTV footage; or
- Assess the risk of individuals committing criminal offenses based solely on profiling or personality traits and characteristics.
- High Risk: AI Systems that pose a significant risk of harm to health, safety, or fundamental rights are High Risk and subject to strict, mandatory requirements under the Act.9
- High-Risk AI Systems generally fall within two categories: (i) AI Systems that constitute a safety component of a product or are otherwise subject to EU health and safety harmonization legislation, and/or (ii) AI Systems that are used in at least one of the following eight specific areas: biometric systems (such as AI Systems used for emotion recognition and certain biometric identification categorization systems); critical infrastructure; education and vocational training (including admissions and evaluations); employment (including recruitment, reviews, promotion, and termination); essential public and private services (such as health care, state benefits, emergency services, and credit ratings); law enforcement; migration, asylum, and border control management; and the administration of justice and democratic processes.
- An AI System that falls within one of these categories may nonetheless fall outside the High-Risk classification if it is intended only for certain narrow procedural, preparatory, or pattern-detection tasks.
- Most High-Risk AI Systems will need to comply with the Act by August 2, 2026.
- See below for further discussion of Providers, Deployers, Importers, and Distributors and their respective obligations regarding High-Risk AI Systems.
- Limited Risk: AI Systems designed to interact directly with natural persons that do not fall in the above categories are classified as Limited Risk. The Act emphasizes the importance of transparency and appropriate disclosures when individuals engage with Limited-Risk AI Systems and specifically requires transparency for certain applications such as chatbots or systems that generate or manipulate content (aka "deep fakes").10
- Minimal Risk: AI Systems deemed to present little to no risk are classified as Minimal Risk. The Act does not impose any mandatory requirements or restrictions on Minimal-Risk AI Systems, such as spam filters.
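As a purely illustrative aid, the four tiers can be thought of as an ordered classification applied in sequence. The Python sketch below uses hypothetical names and boolean inputs; the actual determinations under Articles 5, 6, and 50 are legal judgments, not flags, so this is a sketch of the structure rather than a legal test.

```python
# Illustrative sketch only: the Act's four risk tiers as an enumeration,
# with a toy first-pass triage mirroring the order of the Act's tests.
# All names and inputs are hypothetical simplifications.
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "banned in the EU (Article 5)"
    HIGH = "strict mandatory requirements (Article 6 and Annex III)"
    LIMITED = "transparency obligations (Article 50)"
    MINIMAL = "no mandatory requirements"

def triage(prohibited_practice: bool,
           safety_component_or_annex_iii_area: bool,
           narrow_procedural_task: bool,
           interacts_with_natural_persons: bool) -> RiskLevel:
    """Toy classification; real assessments require legal analysis."""
    if prohibited_practice:
        return RiskLevel.UNACCEPTABLE
    if safety_component_or_annex_iii_area and not narrow_procedural_task:
        return RiskLevel.HIGH
    if interacts_with_natural_persons:
        return RiskLevel.LIMITED
    return RiskLevel.MINIMAL
```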
Determining the appropriate category for an AI System is likely to present some challenges. In 2023, the Initiative for Applied Artificial Intelligence evaluated more than 100 AI Systems used in a corporate context.11 The final report concluded that 1 percent of the AI Systems were banned, 18 percent of the AI Systems were "High Risk," and 42 percent were "low risk." For a variety of reasons, including the need for detailed information regarding usage practices, the authors could not definitively classify the remaining 40 percent of AI Systems. However, the report pointed out that in certain areas, such as HR, the percentage of High-Risk systems could be much higher. In a 2022 survey about the use of AI by German companies, one of the main concerns cited was "violations of data protection regulations," and nearly 50 percent of the surveyed companies mentioned that "uncertainty due to legal hurdles" was a barrier to their use of AI.12
Providers, Deployers, et al., and High-Risk AI Systems
Providers of High-Risk AI Systems must comply with the most extensive obligations under the Act, while Deployers, Importers, and Distributors have their own, more limited requirements. However, Deployers, Importers, and Distributors may become directly responsible for the AI System's compliance with the Act (which is typically the responsibility of Providers) if they put their own brand on the AI System or make substantial changes to it. Among other responsibilities,13 Providers of High-Risk AI Systems must:
- Design the system for appropriate levels of accuracy, robustness, and cybersecurity;
- Register the High-Risk AI System on a publicly available database;
- Conduct and document a Conformity Assessment (including technical documentation) and update as necessary;
- Provide instructions for downstream Deployers to enable their compliance with the Act;
- If the Provider is established outside the EU, appoint an authorized representative in the EU;
- Report serious incidents to regulators within the stated time frames; and
- Implement comprehensive data governance, quality, and risk management systems, and ensure human oversight.
If a Provider determines that an AI System is not High Risk, the Act requires a documented assessment before placing that system on the EU market or putting it into service. The European Commission has committed to providing guidelines for practical implementation, including a list of practical examples of use cases that are High Risk and use cases that are not.
Deployers of High-Risk AI Systems have the following obligations, among others:
- Use the AI System according to the Provider's instructions, and ensure human oversight by competent individuals;
- Ensure that Deployer-supplied input data is relevant and sufficiently representative;
- Conduct a fundamental rights impact assessment if a High-Risk AI System is used for credit checks, life insurance quotes, or the provision of public services;
- On request from an affected individual, provide a reasoned explanation of decisions made using the AI System that have a significant effect on them;
- Monitor the operation of the AI System and report promptly to the Provider and regulators any unacceptable risks to health, safety, or fundamental rights, as well as any serious incidents;
- Keep automatically generated logs; and
- Be transparent about the use of High-Risk AI Systems that are deployed in the workplace or used to make decisions about individuals.
Providers and Deployers of all AI Systems (including Minimal-Risk AI Systems) are subject to general transparency requirements, the obligation to ensure a sufficient level of AI literacy among members of their workforce who interact with AI, and the obligations under other laws such as the General Data Protection Regulation (GDPR).

Importers and Distributors must conduct diligence on the AI System's compliance with the requirements of the Act. If they have reason to believe that the AI System does not comply with the Act, they cannot release it on the EU market.
General-Purpose AI Models
General-Purpose AI Models (GPAI Models)14 are AI models that are capable of performing myriad tasks and that can be integrated into a wide range of systems or applications. GPAI Models were included in the Act late in the drafting process in response to the growing popularity of generative AI tools. Generally, GPAI Models are components of AI Systems and are not separate or stand-alone AI Systems.
The Act distinguishes between high-impact GPAI Models with systemic risk (presumed, among other criteria, where the cumulative amount of computation used for training exceeds 10^25 floating-point operations) and other GPAI Models, and imposes different compliance obligations on each category.15 All Providers of GPAI Models are required to comply with certain basic transparency obligations, and Providers of high-impact GPAI Models have additional obligations concerning risk assessment and mitigation, cybersecurity, documentation, and reporting. The European Commission has committed to publish and update a list of GPAI Models that have systemic risk.
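For a sense of scale, training compute is often estimated with the rule of thumb of roughly six floating-point operations per model parameter per training token, an assumption drawn from the scaling-law literature rather than from the Act itself. The sketch below compares such an estimate against the 10^25 FLOP presumption threshold; all model figures are illustrative.

```python
# Back-of-envelope check against the systemic-risk presumption threshold
# of 1e25 training FLOPs. The "6 * parameters * tokens" estimate is a
# common scaling-law heuristic, not a formula prescribed by the Act.
SYSTEMIC_RISK_FLOPS = 1e25

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    return 6.0 * parameters * training_tokens

# Example: a hypothetical 100B-parameter model trained on 10T tokens.
flops = estimated_training_flops(1e11, 1e13)  # 6e24 FLOPs
print(flops >= SYSTEMIC_RISK_FLOPS)           # False: below the presumption threshold
```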
Penalties and Enforcement
Penalties under the Act can be significant depending on the nature of the violation. The highest penalties apply to the use of banned AI Systems and can reach EUR 35 million or 7 percent of an entity's global annual turnover, whichever is higher. Startups and small and medium-sized enterprises are subject to the same maximum amounts and percentages as other offenders, but pay whichever of the two is lower. As under the GDPR, individuals and legal persons may also lodge infringement complaints with the relevant authority under the Act to report instances of noncompliance or to request an explanation of individual decision-making.
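To make the arithmetic concrete, the sketch below computes the fine ceiling for a banned-AI-System violation under the figures cited above; the function and parameter names are illustrative, not drawn from the Act.

```python
# Illustrative calculation of the maximum fine for using a banned AI System
# (EUR 35 million or 7 percent of worldwide annual turnover). Ordinary
# offenders face whichever is higher; startups and SMEs whichever is lower.
FIXED_CAP_EUR = 35_000_000
TURNOVER_SHARE = 0.07

def max_fine_eur(annual_turnover_eur: float, is_sme: bool = False) -> float:
    turnover_cap = TURNOVER_SHARE * annual_turnover_eur
    return min(FIXED_CAP_EUR, turnover_cap) if is_sme else max(FIXED_CAP_EUR, turnover_cap)

print(max_fine_eur(2_000_000_000))            # 140000000.0 (7% exceeds EUR 35M)
print(max_fine_eur(10_000_000, is_sme=True))  # 700000.0 (lower of the two)
```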
Next Steps
The Act's extraterritorial scope affects organizations globally, and it is critically important not to make assumptions or rush to judgment about whether the Act applies to your business. Now is an opportune time to:
- Inventory your AI Systems: Understand the purpose and function of AI Systems in development, currently in use, and slated for procurement (an illustrative inventory record appears after this list).
- Audit your AI Systems:
- Assess the risk level of each AI System and document it.
- If a banned AI System is identified in any context, alert top management.
- Map the necessary compliance steps for High-Risk AI Systems, including:
- Governance structures and ethics frameworks for AI development and use; and
- Comprehensive training of all stakeholders.
- Establish or enhance data governance, including:
- Data protection by design and by default;
- Data quality and mitigation of biases and other potential harms; and
- AI privacy impact assessments.
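As a hypothetical starting point for the inventory and audit steps above, each system could be captured in a simple record such as the following; every field name here is illustrative rather than prescribed by the Act.

```python
# Hypothetical AI System inventory record supporting the steps above.
# Field names are illustrative, not prescribed by the Act.
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    purpose: str                    # what the system does and for whom
    role: str                       # Provider, Deployer, Importer, or Distributor
    risk_level: str = "unassessed"  # documented tier and rationale
    output_used_in_eu: bool = False # relevant to extraterritorial scope
    compliance_steps: list = field(default_factory=list)  # mapped obligations

# Example entry for an internal HR screening tool (illustrative).
record = AISystemRecord(
    name="resume-screener",
    purpose="rank job applications for recruiters",
    role="Deployer",
    risk_level="High Risk (employment, Annex III)",
    output_used_in_eu=True,
)
```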
AI regulation in the EU, the U.S., and globally is complex and evolving quickly. Key stakeholders should be aware of the Act and its potential implications for their business and diligently monitor new regulatory developments. While a compliance road map is largely based on current obligations and foreseeable plans, in this environment, management should also remain flexible to ensure that appropriate adjustments are implemented as new laws and technological innovations emerge.
Footnotes
1. https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ:L_202401689.
2. Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence; https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence.
3. Article 3 (3) of the Act.
4. Article 3 (4) of the Act.
5. Article 3 (6) of the Act.
6. Article 3 (7) of the Act.
7. To distinguish AI Systems from other software, Article 3 (1) of the Act defines an AI System as "a machine-based system [...] designed to operate with varying levels of autonomy [...] that may exhibit adaptiveness after deployment, and [...] for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."
8. See Article 5 of the Act.
9. See Article 6 and Annex III of the Act.
10. See Article 50 of the Act.
11. https://aai.frb.io/assets/files/AI-Act-Risk-Classification-Study-appliedAI-March-2023.pdf.
12. Achim Berg, Künstliche Intelligenz – Wo steht die deutsche Wirtschaft? (Artificial Intelligence: Where Does the German Economy Stand?), Bitkom, 2022.
13. See Chapter III Section 2 of the Act and related Annexes.
14. Article 3 (63) of the Act.
15. See Chapter V of the Act.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.