ARTICLE
11 January 2024

An Introduction To The EU's Artificial Intelligence Act

Ankura Consulting Group LLC


On December 8, 2023, European Union (EU) lawmakers reached an agreement on the EU's AI Act. The EU AI Act shares many themes with the EU's General Data Protection Regulation (GDPR) and represents a major step forward in the governance of AI.

This article provides an overview of the EU's AI Act focused on applicability and thresholds. Subsequent articles in this series will focus on implementing an AI risk management process and obligations that providers and users of AI tools have pursuant to the EU AI Act.

Applicability, Timing, and Penalties

The EU AI Act applies to (a) providers who place on the market or put into service an AI system in the EU, irrespective of whether those providers are established within the EU, (b) users of AI systems located within the EU, and (c) providers and users of AI systems located outside the EU, where the output produced by the system is used in the EU.1

As with the GDPR, developers of AI systems that target the EU market will be subject to the EU AI Act, and most multinational corporations utilizing AI will be drawn into its scope. For example, if a U.S.-headquartered organization uses AI to make employment decisions about employees in the EU, that organization will be subject to the EU AI Act. As described in the Threshold section below, only AI systems deemed high risk are subject to the EU AI Act's requirements.

We anticipate a 24-month transition period, with the EU AI Act expected to be adopted in the spring of 2024 and enforced beginning in 2025.

The fines for non-compliance with the EU AI Act are similar to those seen under the GDPR but can be higher in certain circumstances. If the offender is a company, violations involving prohibited AI practices could result in fines of up to 30 million EUR or 6% of total worldwide annual turnover for the preceding financial year, whichever is higher.2 Most other violations could result in fines of up to 20 million EUR or 4% of global annual turnover.

Threshold

The EU AI Act follows the principle of proportionality, whereby systems that pose a high risk to the rights and safety of individuals must meet the requirements of the Act. Specifically, the EU AI Act differentiates between AI systems that pose (i) an unacceptable risk, (ii) a high risk, and (iii) a low or minimal risk. Unacceptable risks include subliminal techniques leveraged to exploit vulnerabilities of specific vulnerable groups, such as minors or persons with disabilities.3

Per the EU AI Act, high-risk AI systems include those related to the following:4

  1. Biometric identification and categorization of individuals
  2. Management and operation of critical infrastructure
  3. Education and vocational training
  4. Employment, workers management, and access to self-employment
  5. AI systems intended to be used by public authorities to evaluate the eligibility of individuals for public assistance
  6. Law enforcement
  7. Border control management
  8. Administration of justice and democratic processes

From a commercial perspective, we expect the most common high-risk AI systems to center on education, security (facial recognition), and the employment/recruiting function, especially for multinationals based outside the EU. Modern systems used by utilities to manage infrastructure will also deserve significant attention under the EU AI Act.

AI systems determined to be high risk are subject to many compliance requirements under the EU AI Act. Specifically, Chapters 2 and 3 of the EU AI Act contain roughly 25 pages of requirements focused on the following:5

  1. Implementing a risk management system
  2. Data and data governance
  3. Technical documentation
  4. Record-keeping
  5. Transparency and provisions of information to users
  6. Human oversight
  7. Accuracy, robustness, and cybersecurity

Our next article in this series will focus on the practical aspects of implementing an AI risk management system as required under the EU's AI Act. We will then focus on the responsibilities of users of AI as required by the EU AI Act.

Footnotes

1. EU AI Act Article 2 - Scope.

2. EU AI Act Article 71 - Penalties.

3. EU AI Act, Section 5.5.2 - Prohibited Artificial Intelligence Practices.

4. EU AI Act. Annex III.

5. EU AI Act, Chapter 2 - Requirements for High-Risk AI Systems.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

