I. Introduction

The legislative process for the adoption of a comprehensive law specifically devoted to the regulation of artificial intelligence ("AI") in the European Union ("EU") is coming to a close. On February 13, 2024, the Internal Market and Civil Liberties Committees of the European Parliament voted with a clear majority in favor of the version of the Artificial Intelligence Act (the "EU AI Act" or the "Act") on which EU trilogue institutions (the European Parliament, Council of the European Union and European Commission) reached a political agreement in December 2023 (see our previous blog).1

The EU AI Act's final version has yet to be published, but Euractiv journalist Luca Bertuzzi shared a leaked version of the text in a LinkedIn post on January 22, 2024.2 After nearly three years of debate, then, we have in our hands the rules that will soon govern AI in the EU.

At first glance, Canadian businesses may not feel concerned by legislative developments on the other side of the Atlantic. That assumption could prove costly: like the General Data Protection Regulation (GDPR), which forced many Canadian businesses to update their privacy practices, the EU AI Act will also have extraterritorial effects. It will force companies exporting AI-enhanced regulated products, or systems used in high-risk areas, to the EU to follow complex sets of new compliance rules. The EU AI Act will also affect companies offering online services that have AI components and are accessible to EU consumers (for example, an e-commerce retailer with an AI-based chatbot on its website that transacts with European consumers). The extent of this "Brussels effect" remains uncertain.3 Nevertheless, Canadian businesses should not discount the EU AI Act's influence on upcoming Canadian AI laws (including the draft Artificial Intelligence and Data Act ("AIDA") and a potential Quebec AI law recently recommended by the Conseil de l'Innovation du Québec ("CIQ")). In other words, this is a GDPR-like moment.

In this blog, we update our previous analysis of older versions of the EU AI Act published here and here based on the leaked version of the EU AI Act, while considering how this new law may affect Canadian businesses operating in Europe.

II. Purpose and Scope

The EU AI Act aims to improve the functioning of the EU's internal market through a uniform legal framework regulating the development, placement on the market, putting into service and use of AI systems in the EU, in conformity with EU values.4 It promotes a human-centric approach with a focus on trustworthy AI systems that can safeguard health, safety and the fundamental rights enshrined in the Charter of Fundamental Rights of the European Union (the "EU Charter"), and mitigate AI's potential harmful effects.5

To do so, EU lawmakers introduce minimum standards to address AI risks without, they hope, unduly hindering innovation. The EU AI Act is designed to be both (i) tailored to the risk levels associated with specific AI systems and areas of use while (ii) promoting beneficial AI-based innovation.

The EU AI Act, through its reference to the EU Charter, covers a broad array of fundamental rights, from the right to life and integrity of the person to political rights (e.g., freedom of expression and of conscience) and social rights (e.g., workers' rights). The EU AI Act goes a step further, contemplating societal harms such as serious and irreversible disruption to critical infrastructure and serious damage to the environment.6 Concretely, this means that compliance with the EU AI Act will require businesses to factor in a larger set of potential risks when complying with their obligations to identify and analyze the foreseeable risks of their systems.7

Finally, the EU AI Act applies to uses of AI by public authorities, agencies and other government bodies, such as systems used to evaluate eligibility for essential public services or to conduct predictive policing (both considered high-risk).8 The Act nevertheless contains an important exception: it does not apply to the use of AI in military or intelligence contexts.

III. (The Challenge of) Defining AI

The general public's conception of "AI" has evolved markedly since the introduction of the first iteration of the EU AI Act by the European Commission (the "Commission") on April 21, 2021, thanks to ChatGPT and the emergence of large language models ("LLMs") in late 2022. The public's daily interactions with systems that seem to pass the Turing test have changed our expectations of the power of thinking machines. As we have discussed elsewhere, this evolution forced the EU to redraft certain portions of its AI regulation, including its definition of AI.

The EU AI Act initially defined AI systems as software developed through expressly identified techniques, including: (i) machine learning approaches, (ii) logic- and knowledge-based approaches and (iii) statistical approaches.9 According to the leaked version of the final draft, the EU AI Act now applies to "artificial intelligence systems", defined as machine-based systems that "operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infer, from the input [they] [receive], how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."10 By removing references to specific techniques, this new definition adopts a more technologically neutral approach, which should help the Act stay relevant through technological change. Adding the notion of autonomy also makes clear that the Act governs modern AI systems rather than traditional software systems.11

"Autonomy" may prove a difficult concept to apply in practice. The Act partially sidesteps the issue by referring to "varying levels of autonomy", but this still leaves opens the question of what minimum level of autonomy subjects an AI system to compliance obligations under the EU AI Act.

Moreover, because of this new technology-neutral approach, regulators and courts may find enough flexibility in the definition of AI to capture software systems with advanced capabilities, regardless of whether they are commonly classified as "artificially intelligent", which adds a great deal of uncertainty to the scope of application of this legal regime.

IV. Regulating Risk

A. A Tiered, Risk-Based Approach

Despite a broad scope and definition of AI systems, the EU AI Act reserves its main obligations to only a subset of AI systems in order to balance the promotion of investment and innovation with the mitigation of risks. Specifically, the EU AI Act provides for three sets of rules tailored to the level of risk associated with an AI system: (i) unacceptable risk, (ii) high-risk and (iii) limited risk. For more on the EU AI Act's tiered approach, see our colleague Barry Sookman's previous blog. Since its initial iteration, the EU has supplemented this risk-based approach with specific rules for the regulation of general purpose AI ("GPAI") models, including those posing "systemic risks."12 For more on the regulation of GPAI models, see the General Purpose AI section below.

B. Prohibited AI Practices

The EU AI Act bans the placing on the EU market, the putting into service and the use of specific AI systems associated with AI practices that carry unacceptable risks and contradict the EU's fundamental values. Article 5 of the Act expressly lists those systems:

  • Subliminal manipulation: AI systems that deploy hidden subliminal or deceptive techniques to manipulate an individual or a group into making decisions that could cause them significant harm.13
  • Exploitative manipulation: AI systems that exploit vulnerabilities of an individual or a group due to age, disability or social or economic conditions, materially distorting their decision-making in a manner that could cause them significant harm.14
  • Biometric categorisation: AI systems that categorize individuals based on biometric data to determine if they belong to a protected class.15
  • Social scoring: AI systems that evaluate individuals over time based on their social behavior and that produce a score used to the detriment of individuals or groups in a manner that is unrelated to the context where the behaviour data was generated or that is unjustified or disproportionate.16
  • Crime prediction: AI systems that perform risk assessments of individuals to assess or predict the probability that an individual will commit a crime, based solely on profiling or the assessment of individual traits.17
  • Untargeted scraping: AI systems that create or expand facial recognition databases based on scraping of images from the internet or CCTV footage.18
  • Emotion recognition: AI systems that recognize the emotions of individuals based on biometric data in the workplace and educational contexts, except for medical or safety reasons.19

In addition, the EU AI Act also prohibits the use by the police of real-time remote biometric identification systems in public spaces, except in narrow circumstances, such as the prevention of terrorist attacks, and only if certain procedural steps are followed.20

This list evolved significantly during the EU's trilogue process, with biometric categorisation, crime prediction, untargeted scraping and emotion recognition practices being added to the original draft. Although some practices, like the use of real-time biometrics, concern solely public actors, Canadian businesses should carefully review this list before placing any AI system on the EU market or using one in the EU, as non-compliance will trigger the Act's harshest penalties: administrative fines of up to the higher of 35 million euros or 7% of total worldwide annual turnover (even higher than those in the GDPR).21 The rules regarding prohibited AI practices and penalties will respectively begin to apply six and twelve months after the entry into force of the Act (likely in April or May of this year).
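By way of illustration, the Article 71(3) penalty cap is a simple maximum of two figures. The short sketch below is ours, not the Act's, and the turnover figures are hypothetical:

```python
# Illustrative sketch of the Article 71(3) fine cap for prohibited AI
# practices: the *higher* of EUR 35 million or 7% of total worldwide
# annual turnover. Turnover figures below are hypothetical.

def max_fine_prohibited_practices(worldwide_annual_turnover_eur: float) -> float:
    """Maximum administrative fine for prohibited-practice violations."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# A company with EUR 2 billion in worldwide turnover: 7% is EUR 140 million,
# which exceeds the EUR 35 million floor.
print(max_fine_prohibited_practices(2_000_000_000))  # 140000000.0

# A company with EUR 100 million in turnover: 7% is only EUR 7 million,
# so the EUR 35 million floor sets the cap.
print(max_fine_prohibited_practices(100_000_000))    # 35000000.0
```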

The descriptions of some prohibited practices are also surprisingly broad, with profound consequences for certain existing and emerging industries. For instance, emotion recognition is a highly researched area in AI, with more than 80 patents for workplace emotion recognition systems filed in the United States alone, mostly since 2015.22 Presumably, most of these inventions will not be permitted, regardless of the actual level of risk of a specific system or use case. For the EU, the "serious concerns about the scientific basis" of emotion recognition AI systems and the power imbalance that individuals face in employment and educational contexts justify an outright ban, with the exception of systems used strictly for medical or safety reasons.23 Moreover, unlike the bans on subliminal and exploitative manipulation, the prohibition of these practices is not circumscribed to cases of actual or likely significant harm.

The practices listed in Article 5 are, however, not set in stone: each year, the European Commission will assess the need to amend the list and submit amendments based, notably, on developments in AI technology.24

C. High-Risk AI Systems

Most AI systems will not meet the definitions and thresholds to be prohibited under the EU AI Act, but some will still carry an important level of risk. This section discusses those AI systems.

In the EU AI Act, "high-risk" systems are (i) AI systems used as products or safety components of products covered by certain Union harmonisation legislation (listed in Annex II of the Act), where such legislation requires the products or safety components to undergo third-party conformity assessments,25 and (ii) AI systems used in certain specific areas (listed in Annex III of the Act).

The list below presents a truncated version of the AI system uses that the EU AI Act considers "high-risk".

EU AI Act – High-Risk Systems

ANNEX II

1. Products or safety components of products listed in certain Union harmonisation laws and required to undergo third-party conformity assessment:

  • Machinery
  • Toys
  • Recreational craft and personal watercraft
  • Lifts and their safety components
  • Protective systems for use in explosive atmospheres
  • Radio equipment
  • Pressure equipment
  • Cableway installations
  • Personal protective equipment
  • Appliances burning gaseous fuels

Annex II, Section B also lists other products subject to the Act, but only to limited provisions.

ANNEX III

2. Biometrics, as long as permitted under relevant EU or national law:

  • Remote biometric identification systems, excluding those used for the purpose of confirming a natural person's identity.
  • AI systems intended to be used for biometric categorization based on sensitive or protected attributes.
  • AI systems intended to be used for emotion recognition.

3. Critical infrastructure:

  • AI systems intended to be used as safety components for critical digital infrastructure, road traffic and utility management.

4. Education and vocational training:

  • AI systems intended to be used to determine an individual's admission to education and training institutions at all levels.
  • AI systems intended to be used for assessing and guiding learners' progress in educational and training settings.
  • AI systems intended to determine individuals' education levels within an education and vocational training institution setting.
  • AI systems intended for monitoring and detecting prohibited behaviour of students during tests in educational settings.

5. Employment, workers' management and access to self-employment:

  • AI systems intended for the hiring of natural persons, including targeted job ads, application screening and applicant evaluation.
  • AI systems used to make employment decisions, allocate tasks, and monitor and evaluate performance based on behaviour or traits.

6. Access to and enjoyment of essential private services and essential public services and benefits:

  • AI systems intended to assist public authorities in the assessment and management of individuals' eligibility for essential public benefits and services.
  • AI systems intended to evaluate credit scores or creditworthiness, excluding those used in financial fraud detection.
  • AI systems intended to evaluate and classify emergency calls made by natural persons, to prioritize and dispatch first responders, and emergency triage systems.
  • AI systems intended for risk assessment and pricing in relation to natural persons in the case of life and health insurance.

7. Law enforcement, as long as permitted under relevant EU or national law:

  • AI systems intended for use by or on behalf of law enforcement authorities or EU bodies in support of law enforcement:
    • to predict an individual's risk of becoming a crime victim;
    • in lie detection and similar assessments;
    • to assess evidence reliability in criminal investigations;
    • to predict an individual's risk of committing crimes, beyond personal profiling, or to evaluate their personality or past offences;
    • to profile natural persons during the detection, investigation or prosecution of criminal offences.

8. Migration, asylum and border control management, as long as permitted under relevant EU or national law:

  • AI systems intended to be used by competent public authorities as polygraphs and similar tools.
  • AI systems used by or on behalf of competent public authorities or EU bodies to evaluate risks (such as security, immigration or health risks) associated with individuals entering a member state.
  • AI systems intended to be used by or on behalf of competent public authorities or EU bodies to vet asylum, visa and residence applications, including evidence reliability checks.
  • AI systems intended to be used by or on behalf of competent public authorities, including EU bodies, for migration, asylum and border control purposes to identify individuals, excluding travel document verification.

9. Administration of justice and democratic processes:

  • AI systems intended to assist judicial authorities in legal research, interpretation and application of the law, or in alternative dispute resolution.
  • AI systems intended to be used to influence the outcome of elections or referenda or how individuals vote, not including backend campaign logistics tools.

D. General Purpose AI

The EU AI Act includes provisions pertaining to GPAI models. A GPAI model is defined as an "AI model, including when trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable to competently perform a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications."26

As a novel and particularly significant addition, the EU AI Act creates a narrow carve-out for providers that offer GPAI models under a free and open-source license.27 This exclusion, which reduces the burdens placed on developers of AI systems, signals an attempt at balance: regulating AI while fostering AI innovation in the EU. The carve-out is likely to benefit many current industry players such as Meta (through its many Llama models), Mistral (through the likes of its 7B and 8x7B open-source LLMs) and, more recently, Google with its Gemma suite of models. However, the free and open-source license exemption is not absolute. Notably, the carve-out does not apply where the free and open-source model is used for one of the prohibited purposes discussed in Section B above, or where it is put into service as a high-risk AI system.28 The use of copyrighted content within free and open-source models also still requires the authorization of the rights holder unless exceptions apply.29

High-impact GPAI models, whose capabilities exceed those recorded in the most advanced GPAI models,30 are subject to even more demanding obligations. Under the EU AI Act, GPAI models trained using more than 10^25 FLOPs (floating-point operations) are presumed to have high-impact capabilities31 and are deemed to pose a systemic risk.
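The Act does not prescribe how training compute should be estimated. One rough way to reason about the threshold is the widely cited heuristic that transformer training compute is approximately 6 × parameters × training tokens; the sketch below relies on that approximation, and the model sizes used are hypothetical:

```python
# Rough check against the EU AI Act's systemic-risk presumption threshold
# (training compute above 10^25 FLOPs). Uses the common 6 * N * D
# approximation for transformer training compute; model figures are
# hypothetical, and the heuristic itself is not drawn from the Act.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Estimate training compute via the 6 * N * D heuristic."""
    return 6.0 * parameters * training_tokens

def presumed_systemic_risk(parameters: float, training_tokens: float) -> bool:
    return estimated_training_flops(parameters, training_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

# A hypothetical 70B-parameter model trained on 2 trillion tokens:
# 6 * 7e10 * 2e12 = 8.4e23 FLOPs, below the presumption threshold.
print(presumed_systemic_risk(7e10, 2e12))      # False

# A hypothetical 1.8T-parameter model trained on 13 trillion tokens:
# roughly 1.4e26 FLOPs, above the threshold.
print(presumed_systemic_risk(1.8e12, 1.3e13))  # True
```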

V. Compliance Obligations applicable to High-Risk AI Systems

The EU AI Act outlines in its Chapter 2 a general set of requirements with which high-risk AI systems must comply. These requirements are designed to ensure safety, accountability and transparency.

A. Risk Management System

At its base, the EU AI Act requires providers of high-risk AI systems to establish and maintain a risk management system throughout the entire lifecycle of a high-risk AI system.32 The risk management system must be designed to mitigate or eliminate risks that can reasonably be addressed through the development and design processes of the high-risk AI system, or through the provision of comprehensive technical information to downstream actors in the AI value chain. When implementing such a risk management system, particular consideration should be given to whether the high-risk AI system's intended purpose is likely to adversely impact persons under the age of 18 and other vulnerable groups.33

B. Data Governance

The training, validation and testing data sets of high-risk AI systems must adhere to suitable data governance and management practices tailored to the AI system's intended purposes.34 These obligations, found under Article 10 (including obligations for data annotation, labelling, enrichment and aggregation), focus on the quality and representativeness of the data used to train high-risk AI systems, notably to ensure that such data is free from bias and appropriate for the context and population in which the system will be deployed.

C. Technical Documentation and Record-Keeping

Technical documentation must be drawn up before market placement, specifically to demonstrate that the high-risk AI system meets established standards and to provide the relevant authorities with all the information necessary (listed in Annex IV) to evaluate the AI system's compliance. Small and medium-sized enterprises are permitted to provide the technical documentation in a simplified manner, in what represents a "pro-innovation" approach to an important obligation.35 To maintain an appropriate level of traceability in the functioning of high-risk AI systems (according to their intended purpose), high-risk AI systems must also be designed to automatically record events throughout their operational lifespan.36

D. Transparency

High-risk AI systems are required to be designed and developed in a sufficiently transparent manner to enable providers and users to reasonably understand the system's functioning.37 Accordingly, transparency compliance also involves supplying instructions for use of the high-risk AI system, according to the requirements listed under Article 13 of the EU AI Act. These obligations aim to address the famous "black box" problem of AI.

E. Human Oversight

The EU AI Act requires that high-risk systems be overseen by natural persons during the period in which the system is in use, with the goal of preventing or minimizing the risks to health, safety and fundamental rights that may emerge.38 A higher standard of human oversight is required for the systems referred to in Annex III, point 1(a), which pertain to remote biometrics. More specifically, oversight measures must ensure that a deployer does not act on or make a decision based on the system's identification outcomes unless those outcomes have been verified and confirmed by at least two natural persons with the necessary competence, training and authority.39

F. Accuracy, Robustness and Cybersecurity

High-risk AI systems must be designed and developed so as to achieve an appropriate level of accuracy, robustness and cybersecurity.40 As the Act has yet to fully come into force, it remains to be seen whether this requirement will influence market practices, in particular through the emergence of AI-accuracy service level agreements. Finally, high-risk AI systems that have been certified, or for which a statement of conformity has been issued, under a cybersecurity scheme pursuant to Regulation (EU) 2019/881, and whose references have been published in the Official Journal of the European Union, are presumed to comply with the cybersecurity obligation in Article 15 of the EU AI Act.41

VI. The AI Value Chain and Compliance Obligations

Many different actors are involved in what is generally referred to as the AI value chain. Different actors have different roles and relationships with the AI system and, consequently, varying degrees of compliance obligations with respect to the requirements described in the section above. These obligations also vary based on the type of AI system involved.

For instance, the EU AI Act places its most stringent compliance obligations on "providers", defined as "a natural or legal person, public authority, agency or other body that develops an AI system or a general purpose AI model or that has an AI system or a general purpose AI model developed and places them on the market or puts the system into service under its own name or trademark, whether for payment or free of charge."42 Other actors in the AI value chain are subject to different responsibilities and obligations, including:

  • Deployers: Defined as "any natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity."43
  • Importers: Defined as "any natural or legal person located or established in the [EU] that places on the market an AI system that bears the name or trademark of a natural or legal person established outside the [EU] market."44
  • Distributors: Defined as "any natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the [EU] market."45

A. High-Risk AI Systems

Chapter 3 of Title III provides for the obligations of actors along the high-risk AI system value chain, which are classified under the EU AI Act through specific roles. Building on the previous chapter, which stipulates more general requirements (see Section V), Chapter 3 assigns responsibility according to an entity's role. However, roles are neither absolute nor static. Notably, Article 28 provides that distributors, importers and deployers (all downstream actors) can become subject to the obligations of a provider if they: (i) label a high-risk AI system with their name or trademark after it has been placed on the market or put into service;46 (ii) substantially modify a high-risk AI system that has already been placed on the market or put into service in such a way that it remains a high-risk system in accordance with Article 6; or (iii) alter the intended use of an AI system, including a general purpose AI system not previously categorized as high-risk, that has been placed on the market or put into service, so that it then qualifies as a high-risk AI system per Article 6.47

B. Providers

Given the importance of their role in the AI value chain, the EU AI Act subjects providers to some of its most onerous obligations. Among the most notable are the obligations to have the AI system undergo a conformity assessment, to ensure the AI system's compliance with accessibility requirements and to establish a quality management system.48 The EU AI Act provides for a fluidity of obligations, creating a system where certain role obligations operate in conjunction with one another. For example, providers have an incident response obligation to take corrective actions if they consider that their AI system may not be in conformity with the EU AI Act.49 This obligation operates in congruence with a deployer's obligation to report any system presenting a risk, effectively creating a circular flow of information.50

C. Authorized Representatives

Canadian providers seeking to make their systems available on the EU market must first appoint an EU-authorized representative. Such authorized representatives should be capable of performing the tasks specified in the mandate received from the provider. Authorized representatives may, however, terminate the mandate if they consider that a provider is acting contrary to its obligations. Consequently, Canadian providers are best served by complying with the EU AI Act, to avoid scenarios where their AI system may no longer be able to operate in the EU for lack of an active authorized representative.

D. Importers

Importers are tasked with ensuring that the high-risk AI system in question is in conformity with the EU AI Act. By the same token, an importer that has sufficient reason to consider that a high-risk AI system is not in conformity with the EU AI Act must refrain from placing the system on the market until it has been brought into conformity. Such a scenario would undoubtedly impact Canadian exporters of such systems, further incentivizing them to ensure that the system is in conformity prior to formal export to the EU.51

E. Distributors

Distributors must ensure that high-risk AI systems bear the required CE marking and must not make a high-risk AI system available on the market if it is not in conformity with the EU AI Act or if it presents a risk within the meaning of article 65(1).52

F. Deployers

Deployers are required to ensure that their use of AI systems is in accordance with the instructions provided by the provider, in addition to assigning human oversight to natural persons who have the necessary competence.53 Additionally, deployers are required to monitor the operation of high-risk AI systems. Save for specific exceptions, deployers that are bodies governed by public law or private operators providing public services are also required to perform a fundamental rights impact assessment.54

VII. Compliance Obligations applicable to General Purpose AI Systems

In what marks a departure from its initial iteration, the EU AI Act provides for a separate set of obligations for persons involved in the GPAI model value chain. More specifically, the EU AI Act provides for (i) two subsets of obligations applying to providers of GPAI models and (ii) transparency obligations applying to providers and deployers of certain AI systems and GPAI models. On the whole, these novel obligations represent an evolutionary response, within the EU AI Act, to a rapidly changing AI environment.

A. Transparency Obligations for Providers and Deployers of Certain AI Systems and GPAI Models

Providers of AI systems intended to interact directly with natural persons must ensure that the persons concerned are informed that they are interacting with an AI system.55 Similarly, deployers of AI systems that generate content constituting a deep fake must disclose that the content has been artificially generated or manipulated. In like manner, deployers of AI systems that generate or manipulate text published to inform the public on matters of public interest are required to disclose that said text has been artificially generated or manipulated.56 Such obligations will likely spur interest in efficient watermarking technologies (whose ultimate efficacy remains uncertain).

B. Providers of GPAI Models

Save for the aforementioned free and open-source license carve-out, provider obligations for GPAI models include co-operation with the AI Office and national competent authorities, compliance with EU copyright law and transparency through the disclosure of content used for training.57

C. Providers of GPAI Models with Systemic Risk

In addition to the obligations listed directly above, providers of GPAI models with systemic risk are subject to additional obligations, given the potentially unfavorable outcomes that can derive from such models. These include (i) performing model evaluations; (ii) assessing and mitigating possible systemic risks; (iii) tracking and reporting relevant information about serious incidents; and (iv) ensuring an adequate level of cybersecurity protection.58 This approach is similar to the one implemented by the White House through its recent Executive Order on AI in the case of dual-use foundation models.59

VIII. Oversight

In what represents a revamped governance framework, the EU AI Act provides for an AI Office, which seeks to coordinate and build up know-how at the EU level for the purposes of developing EU expertise and capabilities in the field of AI.60 Moreover, the EU AI Act provides for a scientific panel of independent experts to integrate scientific community input into its implementation.61 In a similar vein, the Act creates an advisory forum to contribute stakeholder input.62 Finally, the EU AI Act provides for an AI board (the "Board"), which serves as an advisory body to the Commission and EU member states, with member states being granted flexibility with respect to the appointment of their representatives. In fact, any person "belonging to public entities who should have the relevant competences and powers to facilitation coordination at national level and contribute to the achievement of the Board's task" may be appointed.63 Much of this revised governance structure can be linked to the rules related to GPAI models and the requirement to enforce them at the EU level.64

For monitoring purposes, Title VII establishes the creation of a database for high-risk AI systems (some of which are explicitly listed in Annex III and appear in the list above). The database is maintained by the Commission in collaboration with member states.65 Providers of these high-risk AI systems will be required to register their AI systems before market placement or service activation. Additionally, Title VIII sets out the responsibilities of high-risk AI system providers concerning post-market monitoring, reporting and investigation of AI-related incidents and malfunctioning. Should any serious incident occur, providers will be required to inform the authorities of the member state where the incident occurred as soon as they become aware of (or reasonably suspect) a link between the AI system and the incident.66

IX. Timeline for Implementation

The European Parliament is likely to adopt the EU AI Act in April or May of this year. Once this is done, the EU AI Act will enter into force on the twentieth day following its publication,67 with a first set of rules on prohibited AI systems starting to apply six months after this date, followed by the rules on GPAI models and the Act's penalty provisions twelve months after that date. The rest of the EU AI Act's rules will be applicable two years after its entry into force, except for obligations relating to high-risk AI systems that are regulated products or are intended to be used as safety components of such products, which will start to apply only after thirty-six months.68
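To make the cascade of deadlines concrete, the sketch below computes the phased application dates from a hypothetical entry-into-force date; the actual date will depend on the Act's publication in the Official Journal and is not yet known:

```python
# Illustrative only: computing the EU AI Act's phased application dates from a
# hypothetical entry-into-force date. The real date will be the twentieth day
# after publication in the Official Journal.
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (safe here because the day is 1)."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, d.day)

entry_into_force = date(2024, 6, 1)  # hypothetical assumption

milestones = {
    "Prohibited AI practices": 6,
    "GPAI model rules and penalty provisions": 12,
    "Most remaining obligations": 24,
    "High-risk AI in regulated products / safety components": 36,
}

for rule_set, months in milestones.items():
    print(f"{rule_set}: applies from {add_months(entry_into_force, months)}")
```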

X. Conclusion

The adoption of the EU AI Act, which is expected in April 2024, marks the first major AI-specific regulation to arrive on the global stage. Legislators and regulators around the world are now beginning to move beyond high-level principles and policy frameworks to hard law, with severe penalties for non-compliance. Considering the EU AI Act's extraterritorial reach, such penalties should prompt Canadian businesses to begin positioning themselves for compliance with the EU AI Act immediately.

Businesses that operate exclusively in the Canadian AI ecosystem, and may therefore not be subject to the EU AI Act, should still pay attention to this momentous legislative development coming from the old continent. At the moment, the EU AI Act represents AI governance à l'Européenne, but it may very well become a global benchmark for the regulation of AI.

Footnotes

1. Luca Bertuzzi, "European Union squares the circle on the world's first AI rulebook", Euractiv, December 9, 2023: https://www.euractiv.com/section/artificial-intelligence/news/european-union-squares-the-circle-on-the-worlds-first-ai-rulebook/; see also the European Parliament's press release: https://www.europarl.europa.eu/news/en/press-room/20240212IPR17618/artificial-intelligence-act-committees-confirm-landmark-agreement.

2. Luca Bertuzzi, LinkedIn post, January 22, 2024: https://www.linkedin.com/posts/luca-bertuzzi-186729130_aiactfinalfour-column21012024pdf-activity-7155091883872964608-L4Dn/

3. https://www.brookings.edu/articles/the-eu-ai-act-will-have-global-impact-but-a-limited-brussels-effect/.

4. EU AI Act, Article 1(1).

5. EU AI Act, Article 1(1).

6. EU AI Act, Recital (78).

7. EU AI Act, Article 9.

8. EU AI Act, Annex III, ss. 5 and 6.

9. EU AI Act, Article 3, para 1.

10. EU AI Act, Article 2, para 5g(1).

11. EU AI Act, Recital (6).

12. EU AI Act, Article 52d.

13. EU AI Act, Article 5(1)(a).

14. EU AI Act, Article 5(1)(b).

15. EU AI Act, Article 5(1)(ba).

16. EU AI Act, Article 5(1)(c).

17. EU AI Act, Article 5(1)(da).

18. EU AI Act, Article 5(1)(db).

19. EU AI Act, Article 5(1)(dc).

20. EU AI Act, Article 5(1)(d).

21. EU AI Act, Article 71(3).

22. Karen L. Boyd & Nazanin Andalibi, "Automated Emotion Recognition in the Workplace: How Proposed Technologies Reveal Potential Futures of Work" (2023) 5: CSCW1 PACM on Human-Computer Interaction 95 at 2 (https://dl.acm.org/doi/abs/10.1145/3579528).

23. EU AI Act, Recital 26(c).

24. EU AI Act, Article 84.

25. EU AI Act, Article 6(1).

26. EU AI Act, Article 2, para (44b).

27. EU AI Act, Article 52c, para 1.

28. EU AI Act, Article 2, para 5g.

29. EU AI Act, Recital (60i).

30. EU AI Act, Article 2a (44c).

31. EU AI Act, Article 52a.

32. EU AI Act, Article 9.

33. EU AI Act, Article 9.

34. EU AI Act, Article 10.

35. EU AI Act, Article 11.

36. EU AI Act, Article 12.

37. EU AI Act, Article 13.

38. EU AI Act, Article 14.

39. EU AI Act, Article 14.

40. EU AI Act, Article 15.

41. EU AI Act, Article 42(2).

42. EU AI Act, Article 2, para 5(g)(2).

43. EU AI Act, Article 2, para 5(g)(4).

44. EU AI Act, Article 2, para 5(g)(6).

45. EU AI Act, Article 2, para 5(g)(7).

46. Except when contracts stipulate otherwise.

47. EU AI Act, Article 28.

48. EU AI Act, Article 16.

49. EU AI Act, Article 21.

50. EU AI Act, Article 29.

51. EU AI Act, Article 26.

52. EU AI Act, Article 27.

53. EU AI Act, Article 29.

54. EU AI Act, Article 29a.

55. EU AI Act, Article 52.

56. EU AI Act, Article 52.

57. EU AI Act, Article 52c.

58. EU AI Act, Article 52d.

59. https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/

60. EU AI Act, Recital (75a).

61. EU AI Act, Recital (75a).

62. EU AI Act, Recital (75a).

63. EU AI Act, Recital (76).

64. https://eucrim.eu/news/ai-act-parliament-and-council-reach-provisional-agreement-on-worlds-first-ai-rules/

65. EU AI Act, Article 60, para 1.

66. EU AI Act, Article 62.

67. EU AI Act, Article 85, para 1.

68. EU AI Act, Article 85.


The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.