ARTICLE
9 September 2024

AI Act: How Will The EU Regulate AI Systems?


The proposal for a regulation laying down harmonised rules on artificial intelligence was presented by the European Commission in April 2021.

Following an interinstitutional trilogue, political agreement on the text was reached on 8 December 2023, and technical adjustments to ensure consistency were made in January 2024, resulting in a finalised version on 26 January 2024.

This regulation, which is part of the Digital Agenda for Europe, aims to provide a framework for one of the most disruptive technologies of recent years (or even decades), and is noteworthy in several respects.

Unlike texts often criticised for regulating a posteriori technologies or uses that have already been widely deployed in practice, this European initiative is the first large-scale binding text on artificial intelligence to be adopted anywhere in the world.

In comparison, the United States has superficial guidelines (Blueprint for an Artificial Intelligence Bill of Rights) and non-binding executive orders.

This proposal for a regulation takes a risk-based approach, calibrating obligations to the risks inherent in the technologies and uses it aims to regulate.

Definition and scope

An AI system is defined as "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments" (Art. 3). Software systems based solely on rules defined by humans are expressly excluded from the scope of the regulation.

According to this definition, one of the main characteristics of AI systems is their capacity for inference.

This inference refers to the process of obtaining outputs, such as predictions, content, recommendations or decisions, which can influence physical and virtual environments, and to the ability of AI systems to derive models or algorithms from inputs or data. Techniques that enable inference when building an AI system include machine-learning approaches that learn from data how to achieve certain objectives, and logic- and knowledge-based approaches that infer from encoded knowledge or a symbolic representation of the task to be solved. The ability of an AI system to infer goes beyond basic data processing, enabling learning, reasoning or modelling (recital 6).

With the types of regulated systems thus defined, the AI Act produces extraterritorial effects, since the rules it lays down apply to any actor that supplies, distributes, deploys (or even uses) AI systems on the territory of the European Union.

Rules and deadlines for compliance

The regulation imposes different obligations depending on the category to which an AI system belongs:

  • AI systems presenting an unacceptable risk (e.g. general-purpose social scoring) are banned outright and incur the highest level of penalties: €35 million or 7% of annual worldwide turnover.
  • High-risk AI systems (processing of biometric data, health, fundamental rights, critical infrastructures, etc.) are subject to restrictive prerequisites, such as registration of the system in a European database and CE marking.
  • AI systems with limited risks (deepfakes for artistic purposes, chatbots, etc.) are subject to transparency obligations: users must be informed that they are dealing with an AI system.
  • For AI systems with minimal risks (video games, technical filters, etc.), voluntary compliance with the obligations set out in the regulation is encouraged.

The deadline for compliance with the obligations arising from the Regulation for the players concerned is, in principle, 24 months from the entry into force of the Regulation.

The Member States also have this 24-month period to ensure that these provisions are fully effective on their territory. In particular, they must designate the national authority responsible for supervising the application of the regulation (the CNIL in France), set up the regulatory sandbox and ensure that sanctions are fully effective.

By way of exception, certain obligations have derogating deadlines:

  • 6 months after entry into force, the prohibitions on unacceptable AI systems take effect.
  • 9 months for the finalisation of the codes of practice for general-purpose AI models (GPAI).
  • The European AI Office must be set up within 12 months.
  • The deadline for compliance is extended to 36 months for high-risk AI systems intended to be used as safety components in a product.
  • AI systems already on the market when the regulation comes into force have 4 years in which to comply with its obligations.
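The staggered schedule above can be made concrete with a short Python sketch that derives each deadline date from the entry into force of the regulation. The entry-into-force date used here (1 August 2024) is an assumption for illustration, not stated in this article; the month offsets are those listed above.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (day clamped to 28 to stay valid)."""
    years, month_index = divmod(d.month - 1 + months, 12)
    return date(d.year + years, month_index + 1, min(d.day, 28))

# Assumed entry-into-force date, for illustration only.
ENTRY_INTO_FORCE = date(2024, 8, 1)

# Deadlines from the regulation, in months after entry into force.
DEADLINES = {
    "prohibitions on unacceptable-risk AI systems": 6,
    "codes of practice for GPAI models": 9,
    "establishment of the European AI Office": 12,
    "general compliance deadline": 24,
    "high-risk AI systems used as safety components in a product": 36,
    "AI systems already on the market (4 years)": 48,
}

for obligation, months in DEADLINES.items():
    print(f"{obligation}: {add_months(ENTRY_INTO_FORCE, months)}")
```

Under that assumed start date, for example, the prohibitions would bite from 1 February 2025 and the general compliance deadline would fall on 1 August 2026.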

Technological and regulatory developments

Numerous developments in uses and technologies were taken into account during the discussions on the AI Act, in particular generative AI systems and general-purpose language models.

As a result, the latest version of the Act includes new provisions concerning general-purpose AI models, or GPAI, and creates specific horizontal obligations, including technical documentation and risk assessments.

Given that some general-purpose AI models present a systemic risk, the latest version of the text sets a quantitative threshold for classifying GPAI models as presenting systemic risk, based on the cumulative amount of computation used for training.

However, computing capacity and the underlying technologies are developing exponentially, so the regulation provides for ongoing assessment of the applicable rules and for periodic review: a first evaluation 3 years after its entry into force, and then every 4 years.

Lastly, clarifications will have to be provided by the dedicated authorities, first and foremost the European AI Office, to enable an operational understanding of the regulatory requirements.

Bringing players into compliance will therefore require them to take account of the regulatory constraints of the AI Act, as well as the many other national and European requirements, from the design stage of AI systems onwards, so as to control usage without holding back innovation.

Originally published 10 April 2024

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
