Key Points
- The EU AI Act entered into force on August 1, 2024, and reaches organizations worldwide whose AI activities touch the EU.
- The Act requires classifying AI systems by risk and imposes obligations on AI providers, deployers, importers, and distributors.
- The Act has specific requirements that carry severe penalties for non-compliance.
- Legal and compliance professionals must ensure their organizations' AI systems comply through a comprehensive compliance program.
On August 1, 2024, the European Union's Artificial Intelligence Act entered into force, becoming the first comprehensive regulatory framework to address the ethical, legal, and societal implications of AI systems. The Act requires organizations that develop or use AI systems to meet obligations that depend on the classification of those systems into risk categories.1 Any organization that deploys or uses AI systems in the EU, or whose AI systems' outputs reach users in the EU, is subject to this regulation.
Understanding the Act's key provisions is crucial for legal and compliance professionals. The penalties for non-compliance are severe: the EU can impose an administrative fine of up to €35 million or 7 percent of worldwide annual gross revenue, whichever is greater.2 Given the mass adoption of AI, the Act is significant for a broad range of organizations.
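To make the penalty calculus concrete, below is a minimal sketch of the fine ceiling as a simple maximum calculation. The revenue figure is hypothetical, and Python is used purely for illustration:

```python
# Illustrative sketch of the EU AI Act's top-tier fine ceiling:
# the greater of EUR 35 million or 7% of worldwide annual gross revenue.
FIXED_CAP_EUR = 35_000_000
REVENUE_SHARE = 0.07

def fine_ceiling(worldwide_annual_revenue_eur: float) -> float:
    """Upper bound of the top-tier administrative fine (Article 99)."""
    return max(FIXED_CAP_EUR, REVENUE_SHARE * worldwide_annual_revenue_eur)

# Hypothetical company with EUR 2 billion in revenue:
# 7% (EUR 140 million) exceeds the EUR 35 million cap.
print(f"EUR {fine_ceiling(2_000_000_000):,.0f}")  # EUR 140,000,000
```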
Classification of AI Systems
One of the EU AI Act's pillars is a risk-based classification system for AI applications. Every AI system an organization uses must be classified. The EU defines AI systems as follows:3
“'AI system' means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments[.]”
This definition encompasses a wide array of AI technologies. While large language models have received most of the recent attention, the EU's definition captures the full range of AI systems, including predictive analytics and rules engines.
The EU's classification comprises four categories stratified by level of risk. AI systems in the highest category, unacceptable risk, are prohibited outright absent an exemption (e.g., affirmative exemptions for law enforcement). The remaining three categories are permitted, provided the management and use of the AI systems comply with the regulation.
The four AI risk categories, illustrated in the sketch after this list, are:
- Unacceptable Risk: These are AI systems deemed to pose a threat to safety, fundamental rights, or societal values and are therefore prohibited.
- High Risk: These are AI systems that require strict compliance with regulatory requirements due to their potential impact on safety and fundamental rights.
- Transparency Risk: These systems are subject to transparency obligations to ensure users know they are interacting with AI.
- Minimal Risk: These are deemed low risk and subject to minimal regulatory oversight.
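To illustrate, these tiers can be represented in an internal AI inventory as a simple enumeration. The category names follow the Act, but the example systems and their mappings are illustrative assumptions, not legal determinations:

```python
from enum import Enum

class RiskCategory(Enum):
    """The EU AI Act's four risk tiers, as illustrative internal labels."""
    UNACCEPTABLE = "unacceptable"  # prohibited absent a narrow exemption
    HIGH = "high"                  # strict compliance obligations apply
    TRANSPARENCY = "transparency"  # disclosure obligations apply
    MINIMAL = "minimal"            # minimal regulatory oversight

# Hypothetical mappings; actual classification requires legal analysis
# of each system against the Act's definitions and annexes.
EXAMPLE_CLASSIFICATIONS = {
    "social-scoring engine": RiskCategory.UNACCEPTABLE,
    "resume-screening model": RiskCategory.HIGH,
    "customer-support chatbot": RiskCategory.TRANSPARENCY,
    "spam filter": RiskCategory.MINIMAL,
}
```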
Roles in AI Systems
The EU AI Act lays out specific roles and responsibilities for the stakeholders involved in developing, deploying, and overseeing AI systems. The four roles defined in the Act are AI system providers, deployers, importers, and distributors ("deployer" is the Act's term for an organization that uses an AI system).
The EU defines these roles as follows:4
'provider' means a natural or legal person, public authority, agency, or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge;
'deployer' means a natural or legal person, public authority, agency, or other body using an AI system under its authority except where the AI system is used in the course of a personal, non-professional activity;
'importer' means a natural or legal person located or established in the Union that places on the market an AI system that bears the name or trademark of a natural or legal person established in a third country;
'distributor' means a natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market;
Each role carries designated responsibilities. This is important because the obligations extend beyond the development of AI to those who use the systems and those who bring AI into the EU and implement it.
Providers are primarily responsible for ensuring their AI systems comply with the Act's requirements. This includes conducting thorough risk assessments, maintaining technical documentation, and ensuring that AI systems are designed and trained using high-quality datasets. Providers must also implement mechanisms for human oversight and maintain a system for logging and monitoring AI system performance.
Deployers of high-risk AI systems also bear significant responsibilities under the EU AI Act. They must operate the AI systems according to the providers' instructions and report any serious incidents or malfunctions to the relevant authorities. Deployers are expected to take appropriate measures to ensure that the AI systems are used ethically and do not pose risks to individuals' rights or safety. Additionally, they must cooperate with regulatory authorities during inspections and audits, providing access to necessary documentation and logs.
Importers and distributors play important roles in ensuring that AI systems entering the EU market comply with the Act. Importers must verify that the AI systems they bring into the EU meet all compliance requirements, including confirming that providers have conducted the mandated assessments and hold the required documentation. Distributors must ensure that storage and transport conditions do not compromise an AI system's compliance. Both importers and distributors must keep records and cooperate with authorities so that only compliant AI systems reach the market.
AI Systems Obligations
Each risk category carries specific requirements for the governance and oversight of AI systems, and organizations must manage each of their AI systems according to its risk category. These requirements span governance and control areas, including technology management and human oversight. In addition, organizations must disclose when AI is being used, such as when chatbots handle customer support.
For high-risk AI systems, the Act imposes several stringent requirements. These requirements are designed to ensure that organizations maintain proper governance and oversight of their AI systems. The requirements for high-risk AI systems include:
- Risk Management: Organizations must establish a risk management system to identify and mitigate the risks associated with high-risk AI applications.
- Data Governance: High-quality training data must be used to develop AI models, minimizing bias and inaccuracy, and data quality must be maintained throughout the AI lifecycle.
- Technical Documentation: Comprehensive technical documentation must be prepared and maintained to demonstrate compliance with regulatory standards.
- Human Oversight: Measures are required so that humans can oversee the AI system and intervene in its decision-making when necessary (a minimal human-in-the-loop sketch follows this list).
- Robustness and Accuracy: AI systems must be designed to deliver accurate, robust, and secure results relative to the risks they pose.
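As one illustration of the human oversight requirement, the sketch below routes low-confidence AI decisions to a human reviewer. The confidence threshold and queue mechanics are assumptions made for illustration, not prescriptions from the Act:

```python
from queue import Queue

# Illustrative human-in-the-loop gate: decisions below an assumed
# confidence threshold are escalated to a human instead of auto-applied.
REVIEW_THRESHOLD = 0.90  # hypothetical threshold, not from the Act

def apply_decision(decision: str, confidence: float, review_queue: Queue) -> str:
    """Apply an AI decision automatically only when confidence is high."""
    if confidence >= REVIEW_THRESHOLD:
        return decision
    review_queue.put((decision, confidence))  # a human must intervene
    return "pending human review"

reviews: Queue = Queue()
print(apply_decision("approve application", 0.97, reviews))  # approve application
print(apply_decision("deny application", 0.55, reviews))     # pending human review
```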
The Act contains additional requirements, each with detailed specifics on the treatment of AI systems. Notably, certain AI systems must adhere to specific disclosure obligations for transparency, including the following:
- User Awareness: Users must be informed that they are interacting with an AI system. This is crucial for chatbots and other AI-driven customer service applications.
- Disclosure of Capabilities: The AI system's limitations and capabilities should be clearly communicated to prevent misunderstandings or misuse.
The Act and Legal and Compliance Professionals
Organizations with any operations involving the EU need to incorporate the EU AI Act into their compliance programs. The financial penalties have garnered headlines: up to €35 million or 7 percent of worldwide annual gross revenue. While the Act is now in effect, the EU is phasing in its obligations over the following two years to give organizations time to comply. Legal and compliance professionals should nonetheless focus on the Act immediately, not only to bring their current AI technology into compliance but also to build compliance into their ongoing AI development.
Legal and compliance professionals should take several proactive steps to ensure their organizations comply with the EU AI Act, using them to augment and enhance existing compliance and internal audit programs. Most requirements are not novel relative to traditional technology compliance, apart from several considerations unique to AI (e.g., AI transparency and bias).
The recommended approaches for understanding the EU AI Act and assessing your organization's compliance are:
- Conduct a Thorough EU AI Act Review: Review the details and requirements of the EU AI Act to determine what obligations, if any, your organization has.
- Identify and Classify AI Systems: Conduct an AI review to document all systems that contain AI, even if your organization did not develop them, and classify each according to the EU's definitions (a minimal inventory sketch follows this list).
- Audit Existing AI Systems: Conduct thorough audits of existing AI systems, assess them against the Act's risk category requirements, and implement the necessary compliance measures.
- Develop Compliance Programs: Establish comprehensive AI compliance programs that include risk management, data governance, and documentation procedures.
- Review Policies and Notifications: Update internal policies and external disclosures, if necessary.
- Training and Awareness: Train staff and stakeholders on the requirements and implications of the EU AI Act to advance your organization's compliance culture.
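As a sketch of the identify-and-classify step, an internal inventory might record each system's role, risk tier, and EU nexus. All field names and entries below are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in a hypothetical AI inventory for an EU AI Act review."""
    name: str
    vendor: str         # third-party systems count, not just in-house builds
    role: str           # provider, deployer, importer, or distributor
    risk_category: str  # unacceptable, high, transparency, or minimal
    has_eu_nexus: bool  # used in the EU, or outputs reaching EU users
    audit_findings: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord("support-chatbot", "AcmeAI", "deployer", "transparency", True),
    AISystemRecord("fraud-scoring", "in-house", "provider", "high", True),
]

# Surface the systems that trigger the heaviest obligations first.
for record in inventory:
    if record.has_eu_nexus and record.risk_category == "high":
        print(f"Prioritize audit: {record.name}")
```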
Conclusion
The EU AI Act establishes a rigorous and structured framework for regulating AI technologies, emphasizing risk management, transparency, and accountability. Legal and compliance professionals are vital in navigating this regulation to ensure their organizations deploy and use AI responsibly and ethically. Although we are still in the early stages of AI compliance, organizations should implement comprehensive AI compliance programs now to serve their stakeholders and mitigate financial, technological, ethical, and reputational risks.
Footnotes
1. European Parliament, “Regulation (EU) 2024/1689 of the European Parliament and of the Council,” https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=OJ:L_202401689, June 13, 2024.
2. European Parliament, “Article 99: Penalties,” https://www.euaiact.com/article/99, July 31, 2024.
3. European Parliament, “EU Artificial Intelligence Act, Article 3: Definitions,” https://artificialintelligenceact.eu/article/3/, June 13, 2024.
4. European Parliament, “EU Artificial Intelligence Act, Article 3: Definitions,” https://artificialintelligenceact.eu/article/3/, June 13, 2024.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.