On 21 April 2021, the EU Commission published a proposal for a Regulation on Artificial Intelligence (draft AI Regulation, proposed Regulation).1 As part of the "European strategy for data,"2 the draft AI Regulation purports to set a gold standard for regulating AI within the Union.3 At the time of writing, the proposed Regulation is still being debated and several thousand amendments have been tabled, but it is expected to be adopted by the fall of 2022.4 Once adopted, the AI Regulation will enter into force twenty days after its publication and will provide a 24-month transition period for all organizations that provide or use AI systems within the EU to implement the respective measures and obligations.

This article provides an overview of the draft AI Regulation and its implications for organizations around the world.

1.1 Scope of Application

The draft AI Regulation will have GDPR-style extraterritorial reach and will apply to (Art. 2):

  • providers that place AI systems on the market or put them into service in the EU;
  • users of AI systems in the EU; and
  • providers and users located outside the EU, where the output produced by the AI system is used within the EU.

The draft AI Regulation defines the term "provider" as a person that develops an AI system or that has an AI system developed with a view to placing it on the market or putting it into service under its own name or trademark (Art. 3). It should be noted that any distributor, importer, user, or other third party will be considered a provider if it (Art. 28):

  1. puts an AI system on the market under its own name or trademark;
  2. modifies the intended purpose of an existing AI system; or
  3. makes substantial modifications to the AI system.

The term "user" is defined as a person using an AI system under its authority, except where the AI system is used in the course of personal non-professional activity (Art. 3).

Notably, the draft AI Regulation provides for a horizontal regulatory framework.5 Existing technology regulations, laws, standards, and norms in most cases apply to specific industries and do not address how these technologies are implemented in hardware and software systems. The proposed Regulation, by contrast, attempts to regulate AI horizontally and thus independently of use cases.6

1.2 Definition of AI

The proposed Regulation does not set out a definition of AI. Instead, it provides a definition of AI systems (Art. 3):

  • "[A]rtificial intelligence system" (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with."

The list of techniques and approaches in Annex I includes machine learning approaches, logic- and knowledge-based approaches, and statistical approaches.

The overly broad definition of AI systems in the draft AI Regulation has drawn criticism from various stakeholders. While it remains to be seen what the final version of the draft AI Regulation will look like, some Member States have proposed a narrower definition of artificial intelligence with two additional requirements: that the system is capable of "learning, reasoning, or modeling implemented with the techniques and approaches," and that it is also a "generative system" directly influencing its environment.7

1.3 Types of AI Systems and Relevant Compliance Requirements

The draft AI Regulation divides AI systems into four categories: unacceptable-risk AI systems, high-risk AI systems, AI systems with transparency obligations, and minimal-risk AI systems.

Unacceptable-risk AI systems (Art. 5): AI practices that contravene EU values and pose a clear threat to people's safety, livelihoods, and fundamental rights fall into this category. Such systems are prohibited under the draft AI Regulation.

High-risk AI systems (Art. 6): The following types of AI systems fall into this category:

  1. Biometric identification systems;
  2. Critical infrastructures that could put the life and health of citizens at risk;
  3. Educational or vocational training, which may determine access to education and the professional course of someone's life;
  4. Safety components of products;
  5. Employment, management of workers, and access to self-employment;
  6. Essential private and public services;
  7. Law enforcement, administration of justice, and democratic processes;
  8. Migration, asylum, and border control management.

Providers of high-risk AI systems are subject to the following substantive obligations:

  1. Create and implement an adequate risk management and mitigation system (Art. 9);
  2. Apply appropriate data and data governance practices to the data used for training and testing high-risk AI systems (Art. 10);
  3. Create and provide users with technical documentation (Art. 11);
  4. Provide detailed record-keeping for authorities (Arts. 12, 20);
  5. Provide users with clear and adequate information (Art. 13);
  6. Provide an appropriate level of human oversight (Art. 14);
  7. Ensure an appropriate level of accuracy, robustness, and cybersecurity (Art. 15);
  8. Put in place a proper quality management system (Art. 17);
  9. Conduct post-market monitoring on the performance of high-risk AI systems (Art. 61).

Providers of high-risk AI systems are also required to comply with the following procedural obligations:

  1. Providers should ensure that their high-risk AI systems complete the so-called "conformity assessment" before they can be offered on the market or put into service (Art. 19). The conformity assessment of biometric identification systems is to be carried out by the relevant "notified body" (Annex VII). For all other types of high-risk AI systems, providers can carry out a self-assessment (Annex VI);
  2. Once the high-risk AI system passes the conformity assessment, providers should draw up a written declaration of conformity (Art. 48), affix the CE marking of conformity to the high-risk AI system documentation (Art. 49), and register the high-risk AI system in the EU database (Art. 51);
  3. Providers should report to the relevant authority within 15 days of becoming aware of a "serious incident" or "malfunctioning" of the high-risk AI system (Art. 62), as illustrated in the sketch after this list.
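
As a minimal sketch of the reporting timeline in item 3, the following Python snippet shows how such a deadline could be tracked. The helper names and the assumption that the 15-day window runs from the date the provider becomes aware of the incident are the author's own illustration, not wording from the draft Regulation.

```python
from datetime import date, timedelta

# Hypothetical helper illustrating the 15-day reporting window under
# Art. 62 of the draft AI Regulation; a simplification, not a compliance tool.
REPORTING_WINDOW = timedelta(days=15)

def reporting_deadline(awareness_date: date) -> date:
    """Last day for reporting a serious incident or malfunctioning,
    assuming the window runs from the day the provider becomes aware."""
    return awareness_date + REPORTING_WINDOW

def is_report_overdue(awareness_date: date, today: date) -> bool:
    """Check whether the reporting deadline has already passed."""
    return today > reporting_deadline(awareness_date)

# Example: awareness on 1 March 2022 gives a deadline of 16 March 2022.
print(reporting_deadline(date(2022, 3, 1)))                    # 2022-03-16
print(is_report_overdue(date(2022, 3, 1), date(2022, 3, 20)))  # True
```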

Unlike providers, users of high-risk AI systems are subject to more limited requirements. In essence, they must use high-risk AI systems in accordance with the instructions for use, monitor their operation, and keep a record of the logs they generate (Art. 29).

AI systems with transparency obligations (Art. 52): Chatbots or impersonator bots fall under this category. Providers of such systems must ensure that they are designed and developed in such a way that natural persons are informed that they are interacting with an AI system.

Minimal-risk AI systems (Art. 69): AI-enabled video games or spam filters fall under this category. There are no compliance requirements for such AI systems. The vast majority of AI systems currently used in the EU can be defined as minimal-risk AI systems.
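
To make the tiered structure easier to see at a glance, the following Python sketch models the four categories and maps a few example use cases to them. It is purely illustrative: the enum, the mapping, and the classify_risk_tier helper are hypothetical simplifications, and a real classification would have to follow the criteria of the draft Regulation itself.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative model of the four risk tiers in the draft AI Regulation."""
    UNACCEPTABLE = "prohibited practices (Art. 5)"
    HIGH = "high-risk: substantive and procedural obligations (Art. 6 ff.)"
    TRANSPARENCY = "transparency obligations only (Art. 52)"
    MINIMAL = "minimal risk: no mandatory requirements (Art. 69)"

# Hypothetical, simplified mapping of example use cases to tiers; a real
# assessment would follow the proposal's own criteria and annexes.
EXAMPLE_USE_CASES = {
    "CV-screening tool used in recruitment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.TRANSPARENCY,
    "spam filter": RiskTier.MINIMAL,
}

def classify_risk_tier(use_case: str) -> RiskTier:
    """Return the illustrative tier for a known example use case.
    Defaulting to minimal risk is a demo convenience, not a legal presumption."""
    return EXAMPLE_USE_CASES.get(use_case, RiskTier.MINIMAL)

for case, tier in EXAMPLE_USE_CASES.items():
    print(f"{case}: {tier.value}")
```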

1.4 Sanctions

Infringement of obligations under the draft AI Regulation results in fines quantified according to the type of infringement (Art. 71).

The violator (provider and/or user where applicable) will be subject to administrative fines of up to EUR 30,000,000 or, if the offender is a company, up to 6 % of its total worldwide annual turnover for the preceding financial year, whichever is higher, for the following two types of infringements:

  • the placing on the market, putting into service, or use of unacceptable risk AI practices (Article 5);
  • non-compliance with the data governance requirements of the high-risk AI system (Article 10.4).

For non-compliance with requirements and obligations other than those mentioned above, the violator will be subject to administrative fines of up to EUR 20,000,000 or, if the offender is a company, up to 4 % of its total worldwide annual turnover for the preceding financial year, whichever is higher.

The draft AI Regulation also provides for sanctions for supplying incorrect, incomplete, or misleading information to notified bodies and national competent authorities in reply to a request. In such cases, violators will be subject to administrative fines of up to EUR 10,000,000 or, if the offender is a company, up to 2 % of its total worldwide annual turnover for the preceding financial year, whichever is higher.
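
For illustration only, the "whichever is higher" rule behind these caps can be expressed as a simple calculation. The sketch below assumes the three tiers described above (EUR 30 million / 6 %, EUR 20 million / 4 %, EUR 10 million / 2 %); the tier keys and the max_fine function are hypothetical and carry no legal meaning.

```python
# Illustrative sketch of the "whichever is higher" fine caps in Art. 71 of
# the draft AI Regulation. Tier keys and the function are hypothetical
# simplifications for explanation only, not legal advice.
FINE_TIERS = {
    "prohibited_practices_or_data_governance": (30_000_000, 0.06),
    "other_requirements_and_obligations": (20_000_000, 0.04),
    "incorrect_information_to_authorities": (10_000_000, 0.02),
}

def max_fine(tier: str, worldwide_annual_turnover_eur: float = 0.0) -> float:
    """Maximum administrative fine for a tier: for companies, the higher of
    the fixed amount and the percentage of total worldwide annual turnover
    for the preceding financial year."""
    fixed_cap, turnover_share = FINE_TIERS[tier]
    return max(fixed_cap, turnover_share * worldwide_annual_turnover_eur)

# Example: a company with EUR 2 billion turnover infringing Art. 5 faces a
# cap of max(EUR 30m, 6% of EUR 2bn) = EUR 120 million.
print(max_fine("prohibited_practices_or_data_governance", 2_000_000_000))
```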

1.5 Implications for Business

The draft AI Regulation sets a high bar for the compliance obligations of organizations working with AI across all economic sectors. A global survey conducted by McKinsey shows that many organizations are well behind in meeting the potential compliance requirements of the upcoming Regulation: only 38 percent of them are actively addressing regulatory-compliance risks in the field of AI.8 Affected organizations should therefore prepare for the regulations that are sure to follow and adopt a strategic compliance program for their AI systems. This will require the necessary skill sets and resources, leading to a heightened role for compliance professionals. Companies can be expected to expand their compliance teams and hire more professionals with this specific background, and there will be an increased need for outside legal counsel as well.

Footnotes

1. Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, COM(2021) 206 final, 21 April 2021, 2021/0106 (COD).

2. 'A European Strategy for Data' (Shaping Europe's digital future, 2022) https://digital-strategy.ec.europa.eu/en/policies/strategy-data accessed 14 July 2022.

3. 'A European Approach To Artificial Intelligence' (Shaping Europe's digital future, 2022) https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence accessed 14 July 2022.

4. 'Europe's Artificial Intelligence Debate Heats Up' (CEPA, 2022) https://cepa.org/europes-artificial-intelligence-debate-heats-up/ accessed 14 July 2022.

5. The draft AI Regulation (n 1) Explanatory Memorandum. Para 1.1.

6. Patrick Glauner, 'An Assessment of the AI Regulation Proposed by the European Commission' forthcoming in Sepehr Ehsani, Patrick Glauner, Philipp Plugmann and Florian M. (eds), The Future Circle of Healthcare: AI, 3D Printing, Longevity, Ethics, and Uncertainty Mitigation (Springer 2022) 4.

7. Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, 29 November 2021-2021/0106(COD).

8. 'The State of AI in 2020' (McKinsey & Company Global Survey, 2022) https://www.mckinsey.com/business-functions/quantumblack/our-insights/global-survey-the-state-of-ai-in-2020 accessed 14 July 2022.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.