On Wednesday, March 13, the European Parliament approved the regulation harmonizing rules on artificial intelligence (AI) (the AI Act).

Given its global reach, stakeholders both inside and outside the EU will have to comply with the AI Act when it takes effect this year.

Global scope — The AI Act will apply to all providers, manufacturers, importers, distributors and deployers of systems integrating AI that are established in the EU or, if established outside the EU, that place their AI system or model on the EU market.

Each AI system or model will have to comply with the AI Act. Therefore, a company using multiple systems or models integrating AI will have to conduct a separate review of each of them for compliance with the AI Act.

Time frame — The AI Act has been formally adopted by the EU Parliament and must be endorsed by the EU Council. It will come into force within 20 days after its publication in the Official Journal of the EU. As it is a regulation, it will be directly effective in all Member States and will not have to be transposed into national law.

The provisions of the AI Act will come into force progressively, according to the following interim timetable:

  • 6 months after entry into force: prohibitions on unacceptable-risk AI systems apply
  • 12 months after entry into force: obligations for general-purpose AI models apply
  • 24 months after entry into force: the majority of the AI Act's provisions, including those governing high-risk AI systems, apply
  • 36 months after entry into force: obligations for high-risk AI systems embedded in products covered by EU product safety legislation apply

New requirements — As detailed in our client alert of Dec. 19, 2023, the obligations vary according to the level of risk of the AI system:

  • Prohibited AI systems: These include all systems whose use is considered contrary to the values of the EU and that are therefore strictly prohibited. This prohibition covers AI applications such as subliminal manipulation, biometric categorization of persons based on sensitive characteristics, real-time remote biometric identification, and exploitation of the vulnerabilities of persons resulting in harmful behavior.
  • Fines for noncompliance can reach 35 million euros or 7% of the company's worldwide annual turnover, whichever is higher.
  • High-risk AI systems: These include all systems that create a high risk to the health, safety or fundamental rights of individuals. Two categories of AI systems are concerned:
    • AI systems used in products falling under the EU's product safety legislation (including toys, aviation, cars, medical devices and lifts)
    • AI systems falling into one of eight specific areas, namely (i) biometric identification and categorization of individuals, (ii) critical infrastructures, (iii) education and vocational training, (iv) employment, workforce management and access to self-employment, (v) access to and enjoyment of essential private services and public services and benefits, (vi) law enforcement, (vii) migration, asylum and border control management and (viii) administration of justice and democratic processes.

The deployment of high-risk AI systems is strictly regulated by the AI Act.

Before being placed on the market, such systems will have to undergo a conformity assessment, be covered by an EU declaration of conformity, be registered in an EU database and bear the CE marking.

Fines for noncompliance can reach 15 million euros or 3% of the company's worldwide annual turnover, whichever is higher.

  • Limited-risk AI systems: These include all systems that interact with humans and that are not considered an unacceptable risk or high-risk AI system (e.g., chatbots, AI-generated content not falling under other categories).

For transparency reasons, users must be informed that the content they are accessing is generated by AI.

Fines for noncompliance can reach 7.5 million euros or 1% of the company's worldwide annual turnover, whichever is higher.

  • Minimal-risk AI systems: These include all AI systems that do not fall under any of the above-mentioned categories (e.g., spam filters, recommender systems). AI systems considered to be minimal risk are not subject to any restrictions or obligations under the AI Act. However, it is advisable to adopt a code of conduct governing the use of such systems.

Next steps — To provide some flexibility in the regulatory process and take account of technological developments, some provisions of the AI Act remain to be clarified, notably the designation of the national authorities responsible for monitoring and controlling the correct application of the regulation and for imposing sanctions.

The European Commission is expected to issue guidance on various topics and delegated acts within five years, particularly on the definition of AI systems, the criteria and use cases for "high-risk" AI, the thresholds for general-purpose AI (GPAI) models presenting systemic risk, technical documentation requirements for GPAI, conformity assessments and EU declarations of conformity.

AI regulatory sandboxes will also be implemented at a national level and will be operational 24 months after entry into force, with the notable aim of providing guidance on regulatory expectations and how to fulfill the requirements and obligations set out in the AI Act.

The EU AI Office is to provide advice on the implementation of the new rules, in particular as regards GPAI models, and to develop codes of practice to support the application of the AI Act. All of these further developments will have to be closely monitored for full compliance with this new regulatory framework.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.