The European Union's (EU) proposal for a regulation laying down harmonised rules on artificial intelligence (AI)1 was published by the European Commission in April 2021, initiating the ordinary legislative procedure under Article 294 of the Treaty on the Functioning of the European Union (TFEU).
The EU AI Act was not the first initiative to regulate AI. Alongside numerous regulatory attempts and documents outlining principles for AI, the Council of Europe established the Ad Hoc Committee on Artificial Intelligence (CAHAI) in 2019 to explore the feasibility of an international instrument; CAHAI completed its mandate in 2021 and was succeeded by the Committee on Artificial Intelligence (CAI). Nevertheless, the EU AI Act has emerged as the pioneering legal instrument in AI regulation, for reasons including the so-called Brussels Effect and the EU's ambition to make the AI Act for AI regulation what the General Data Protection Regulation (GDPR) is for data protection. This article first outlines the current status of the AI Act. It then briefly explains the Act's general structure. Finally, it considers the practical implications of the Act, including its effects on AI systems in non-EU states.
The legislative path of the EU AI Act
The EU AI Act was conceived as a regulation, meaning that, once adopted, it will be directly applicable and binding in all EU Member States without requiring transposition into national legislation. However, it is not yet binding.
At the time of writing, the EU AI Act was in the final stages of the EU's ordinary legislative procedure. Like many other major pieces of legislation concerning the digital domain, the AI Act has gone through an extensive trilogue procedure involving the European co-legislators, the Council of the EU and the European Parliament, under the moderation of the Commission. A political agreement was reached between the Council and the Parliament on 9 December 2023.2 As the latest step at the time of writing, the draft prepared in accordance with the political agreement was approved by the European Parliament on 13 March 2024, following prior endorsements by Coreper I of the Council of the EU on 2 February 2024 and by the European Parliament's joint IMCO-LIBE committees on 13 February 2024. The next steps are a further approval of the text by the European Parliament after the lawyer-linguists' review and, subsequently, the final seal of approval by the Council of the EU. The Act is expected to be adopted and published in the Official Journal of the EU before the European elections scheduled for June 2024. In the remaining sections of this article, reference is made to the latest text endorsed by Coreper.3
What does the EU AI Act bring?
The core subject of the AI Act is AI systems. The Act defines an AI system as 'a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.' This is a broad definition, leaving room for interpretation as to whether a given system qualifies as an AI system for the purposes of the Act. As explained below, systems falling under the purview of the Act may be subject to strict legal as well as technical requirements. Hence, creating an AI inventory and correctly identifying AI systems is crucial for AI developers, firms, and users.
The EU AI Act adopts a risk-based approach and classifies AI systems according to their risk levels. On the assumption that some AI applications pose unacceptable risks, the Act prohibits certain use cases, such as manipulative uses of AI and the use of AI systems for social scoring. It then classifies other AI systems as high-risk AI systems (HRAI), which are subject to a set of stringent technical requirements and legal obligations. It should be noted that HRAI include some quite common AI systems, such as AI-powered CV-screening tools and AI systems used for biometric identification. The Act presumes that the remaining systems pose minimal risk and therefore does not impose mandatory requirements on them.
In addition to the risk-based approach, the Act regulates two further categories. First, it introduces a set of transparency obligations for the providers or deployers of AI systems that interact with natural persons. Second, it creates a separate two-layered, risk-based classification for AI models capable of being used for a variety of tasks and downstream applications, referred to as general purpose AI (GPAI) models; large language models and image generation models are examples. Depending on the computing power used to train a GPAI model, the Act classifies some of these as GPAI models with systemic risk and imposes more stringent obligations on their providers. Currently, very few models exceed this threshold; the most prominent are OpenAI's GPT-4 and Google's Gemini.
How will the AI Act be enforced?
The EU AI Act is a complex piece of legislation, with multiple classifications, technical requirements, conformity assessment procedures, monitoring obligations, and hefty penalties of up to EUR 35 million. As with the GDPR, not only the natural and legal persons developing, deploying, and using AI systems, but also the Member States and the enforcement bodies, will need time to comply with the Act. Recognising this need, the Act comes with a gradual application timeline and a general two-year grace period starting from its entry into force. There are exceptions to this general grace period: the provisions on prohibitions have a six-month grace period, whereas the provisions on notified bodies, GPAI models, and penalties have a 12-month grace period. By contrast, the provisions on requirements for HRAI associated with the Union harmonisation legislation listed under Annex II of the Act will become applicable 36 months after the Act's entry into force.
Like its application, the Act's implementation will be gradual, and it will begin before the Act becomes applicable. The Commission will promote early voluntary commitment to the rules and principles of the AI Act, with the participation of industry, under a scheme called the AI Pact.4 In fact, implementation steps have already begun: on 24 January 2024, even before the Act's official adoption, the Commission established the AI Office, a function of the Commission to which certain tasks are delegated under the Act.5
What is the significance of the AI Act for AI operators and users in non-EU states?
The AI Act is a piece of European legislation with global implications for two main reasons. First, the AI Act will have extraterritorial effect, as its scope covers providers of AI systems located outside the EU if their AI systems are placed on the market or put into service within the EU, or if the outputs of their AI systems are used in the EU. Second, it is expected to become the gold standard for AI regulation, influencing further regulatory initiatives. Finally, it should be noted that compliance with complex regulatory frameworks such as the AI Act takes time. With adoption and early-commitment schemes on the horizon, discussions regarding the AI Act and its implementation, along with significant regulatory questions, can be expected to become more prevalent in the near future. It is important for all companies involved in developing, deploying, using, or otherwise engaging with the supply chain of AI systems to start their compliance efforts as soon as possible and to follow developments in this field closely.
Footnotes
1. European Commission, 'Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts' (COM/2021/206 final, Document 52021PC0206) https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52021PC0206.
2. European Parliament, 'Artificial Intelligence Act: deal on comprehensive rules for trustworthy AI' (Press Release, 9 December 2023) https://www.europarl.europa.eu/news/en/press-room/20231206IPR15699/artificial-intelligence-act-deal-on-comprehensive-rules-for-trustworthy-ai.
3. Council of the European Union, ST 5662 2024 INIT, Note (26 January 2024) https://data.consilium.europa.eu/doc/document/ST-5662-2024-INIT/en/pdf.
4. European Commission, 'AI Pact' https://digital-strategy.ec.europa.eu/en/policies/ai-pact.
5. European Commission, 'Commission Decision Establishing the European AI Office' (Policy and Legislation, 24 January 2024) https://digital-strategy.ec.europa.eu/en/library/commission-decision-establishing-european-ai-office.
Originally published 15 April 2024.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.