SUMMARY

  • Draft AI Act – What Happened So Far: Back in April 2021, the European Commission presented a draft regulation laying down harmonized rules on artificial intelligence (AI Act) aimed at safeguarding fundamental EU rights and user safety (please see our Client Alert: Draft EU Regulation for Artificial Intelligence Proposes Fines of up to 6% of Total Annual Turnover). The Commission's draft was followed by the Council of the European Union's own draft in December 2022 and, finally, by an independent draft of the European Parliament in June 2023.
  • Generative AI: The recent emergence of widespread publicly available generative AI applications has led to last-minute proposals for regulatory amendments to the AI Act, most notably a globally unique transparency obligation regarding copyrighted material used as training data. Other copyright issues related to generative AI remain unanswered for now.
  • Consolidation of the Various Drafts – Trilogue: The EU Commission, the EU Council, and the EU Parliament have now entered the final closed-door negotiations ("Trilogue") to agree on a final text of the EU's ambitious AI Act based on their three diverging proposals. In particular, the definition of AI, the risk classification of AI, and the interplay between existing laws and the AI Act to avoid double regulation will be the main discussion points.

This update gives a comprehensive overview of where the three legislative bodies agree and which parts of the draft AI Act will still require joint agreement over the course of the Trilogue.

I. What Is to Be Regulated by the AI Act?

As a Regulation, the AI Act will be directly applicable and immediately enforceable in the Member States upon its entry into force. The Regulation covers all sectors of AI applications and is not limited to a specific area of law. The AI Act takes a risk-based approach to regulation: it bans AI systems with unacceptable risks, subjects high-risk AI systems to a wide range of obligations for providers, users, importers, and distributors, and sets out general obligations and principles for all AI applications. However, the remaining open issues have sparked a heated discussion within the Trilogue as well as between the EU and industry organizations.

1. Safe Bets: Provisions That Will Most Likely Be Included in the AI Act

Regulation of "AI Systems"

  • Consensus: "AI systems" have to be regulated and will thus be the subject matter of the AI Act.
  • Trilogue Discussion: The exact definition of AI is a controversial matter. So far, the three drafts of the AI Act use the terms foundation models, general purpose AI, and generative AI partly with different meanings and in different contexts. Furthermore, the proposals differ mainly on whether to focus on software developed through machine learning, logic- and knowledge-based, and/or statistical approaches, or on the autonomy of machine-based systems. Another open question is whether to exclude certain use cases of AI from the scope of the Act (e.g., military, scientific research, non-professional activities), which would leave these matters to national legislators and arguably undermine the harmonization of AI regulation in the EU. The Trilogue will also have to address the Parliament's proposal for an opening clause allowing Member States and the EU to regulate the protection of workers' rights in relation to the use of AI by employers.

Obligating (Global) Providers, Users/Deployers, Importers, and Distributors

  • Consensus: The AI Act will include obligations on providers, users, importers, and distributors of AI systems to varying degrees. It will apply to all AI systems used in the EU market, regardless of the location of the party subject to the obligations.
  • Trilogue Discussion: The Parliament's proposal slightly deviates from the EU Commission's wording and has been understood as limiting the AI Act's application to AI systems that are intended to be used in the EU (as opposed to those factually used there).

Banning AI Systems with Unacceptable Risks

  • Consensus: As the most severe step in its risk-based approach, the AI Act will prohibit AI practices that are considered unacceptable because they contravene EU values such as fundamental rights. This ban will apply to AI applications with functions such as (i) distorting human behavior through subliminal techniques, (ii) exploiting vulnerabilities due to, e.g., age or physical and mental abilities to harmfully distort behavior, (iii) social scoring (by public authorities), and (iv) real-time biometric identification.
  • Trilogue Discussion: Council and Parliament want to extend the ban on social scoring to private actors. The diverging positions on the use of AI for real-time biometric identification are likely to be another controversial issue. Among other things, the Parliament proposes to ban certain additional AI applications, such as biometric categorization systems based on sensitive personal characteristics, which critics say would restrict legitimate and low-risk AI applications, such as bias-correction training for content moderation tools on social media platforms.

Specific Regulation of "High-Risk" AI Systems

  • Consensus: Providers, users, importers, and distributors of high-risk AI systems will be subject to specific obligations. The AI Act will include an annex with a list of specific AI systems that are considered high-risk.
  • Trilogue Discussion: The three AI Act proposals take different approaches to what constitutes a high-risk AI system.
    • The exact content of the high-risk AI list is not yet certain. Both Council and Parliament have altered the Commission's list, for example, regarding AI systems used for deep fake detection by law enforcement authorities, life and health insurance, and education. Notably, the Parliament proposes to classify as high-risk those AI systems used by social media platforms in their recommender systems for user-generated content if the platform is considered a very large online platform (VLOP) under the recently passed Digital Services Act (DSA). As the DSA already subjects such VLOPs to extensive transparency, audit, risk assessment, and mitigation obligations, stakeholders are concerned that an additional classification as a high-risk AI system may lead to overlapping regulation.
    • Council and Parliament propose that listed AI systems should only be considered high-risk if they have an actual likelihood of posing a significant risk to health, safety, fundamental rights, or the environment. It remains to be seen whether the final Act will include the Parliament's approach, under which providers must notify the competent national authority of listed AI systems that they consider not to pose such a risk; a provider that places the AI system on the market during the three-month review period would risk a fine for misclassification. Stakeholders have voiced concerns that authorities do not have sufficient capacity to handle such a notification procedure, thereby delaying the launch of new technology.

The Trilogue also needs to find a compromise between the different approaches to the obligations related to high-risk AI systems.

  • Providers of high-risk AI systems will face a comprehensive set of obligations, from high-quality datasets and transparency to risk management, post-market monitoring, and conformity assessment. It remains to be seen whether the final AI Act will include the providers' (and, where applicable, users') obligation to grant the national competent authority or the EU Commission access to the logs automatically generated by the AI system, as proposed by the Parliament. The Parliament also wants to regulate the contractual relationship between the provider and the supplier of tools, services, components, or processes used in or integrated into the system: unilaterally imposed "unfair contract terms" would not be binding on micro-enterprises, SMEs, and startups. Regarding a potential overlap of conformity and certification obligations under the AI Act and under sector-specific regulation, industry organizations are currently pushing for legal certainty, requesting that a double compliance burden be avoided.
  • Users may use high-risk AI systems only in accordance with the provider's instructions, must keep records of their input data, and must monitor for evident anomalies. In addition, the Parliament wants to require users to conduct a detailed fundamental rights impact assessment of the system. Where users exercise control over the system, they must implement competent human oversight and monitor robustness and cybersecurity measures.
  • Importers of high-risk AI systems are required to ensure that the provider has carried out the conformity assessment and prepared the technical documentation and that the system bears the conformity mark. The Parliament wants such importers to also ensure that an authorized representative has been appointed and not to place the system on the market if they consider it not to be in conformity with the AI Act.
  • Distributors of high-risk AI systems are required to ensure that the system carries the CE marking, the necessary documentation, and instructions, and they must not place the system on the market if they consider it not to be in conformity with the obligations imposed by the AI Act. Given the different set of obligations suggested in the Parliament's draft, such as the requirement for an authorized representative, the three legislative bodies still must agree on a joint position.

General Obligations and Principles for All AI Solutions

  • Under the AI Act's risk-based approach, even AI systems that are not considered high-risk are subject to general obligations, such as informing people that they are interacting with an AI system and disclosing the artificial generation or manipulation of content in the case of deep fakes. It remains to be seen whether the final Act will also include the Parliament's more detailed approach of incorporating non-binding guiding principles (e.g., on human agency and oversight, diversity and fairness, social and environmental well-being) in standardization requests and recommendations for technical guidance by the EU Commission and the AI Office.

Imposing Severe Fines

  • Failure to comply with the obligations of the AI Act will be punishable by heavy fines. The maximum amount will have to be agreed during the Trilogue. The Commission proposes fines of up to EUR 30 million or 6% of total worldwide annual turnover, while the Parliament aims for higher levels (up to EUR 40 million/7%).

2. Wait and See: Provisions Whose Inclusion Remains Uncertain

Transparency of Copyrighted Training Material

  • The Parliament wants to include in the AI Act a transparency obligation for copyrighted AI training data, as well as other specific obligations for generative AI (see II. below).

General Purpose AI or Foundation Models?

  • The Council wants to explicitly regulate "General Purpose AI" (GPAI): systems that can be used for many different purposes and can be integrated into another system that could subsequently become high-risk. Certain requirements for high-risk AI systems would apply to GPAI, but only after a specific EU implementing act.

    The Parliament instead wants to focus on foundation models. The definition is similar: an AI model that is trained on a wide range of data at scale, is designed for generality of output, and can be adapted to a wide range of distinctive tasks. Purpose-built AI systems or GPAI can be implementations of a foundation model. The Parliament Draft underlines that there is significant uncertainty about how foundation models will evolve and that it is therefore essential to clarify the legal situation of their providers. Providers of foundation models would be subject to more specific obligations, such as identifying, reducing, and mitigating risks to health and fundamental rights, reducing energy and resource consumption, keeping extensive technical documentation available to national authorities for 10 years, and registering in the EU database. Industry organizations are fighting this approach, arguing that the risk and environmental impact analysis would be too expensive and hardly feasible for providers, considering the broad potential usability of foundation models.

AI Literacy

  • Finally, it remains to be seen whether the final AI Act will include the Parliament's proposal for an "AI literacy" provision to ensure that providers, users, and affected persons have sufficient skills, knowledge, and understanding to comply with and enforce the AI Act.

II. Generative AI: Specific Challenges & Proposed Regulation

The recent emergence of publicly available, highly capable generative AI systems has finally attracted the attention of EU lawmakers. Text-to-image generation models, chat-like text-to-text services, and even text-to-music applications have become a global attraction in record time. Specifically for AI chatbots, "Large Language Models" (LLMs), i.e., AI models that can generate natural language text using deep learning techniques trained on massive datasets, have become well known.

This rapid development in the field of generative AI came too late to be reflected in the AI Act proposals of the Commission and the Council. The EU Parliament's proposal, by contrast, explicitly addresses generative AI, although it remains to be seen whether these provisions will be included in the final Act. In any case, the main challenges of generative AI are not addressed, or are addressed only indirectly, by the AI Act:

  • No Copyright Regulation: From the use of (web-scraped) copyrighted content as AI training data (along with the questions of copyright infringement, permitted text and data mining, and whether the use of copyrighted content for AI training is subject to remuneration), to the protectability of AI-generated output and the search for the appropriate rights holder, to liability for copyright-infringing AI output, almost every single step in the process of generative AI is currently a matter of copyright controversy. The AI Act does not address any of these issues.
  • Transparency of Copyrighted Training Data: However, copyright makes an indirect appearance in the regulation of foundation models used in generative AI systems. Providers would be required to publicly disclose a "sufficiently detailed summary" of the copyrighted material used as training data, Art. 28b(4)(c). Such a transparency provision would currently be unique worldwide.

    During the Berlin Copyright Policy Conference, Axel Voss, MEP and rapporteur on the AI Act for the European Parliament's Committee on Legal Affairs (JURI), expressed optimism that this provision will survive the Trilogue and thus be part of the final AI Act. Mr. Voss also emphasized that the Spanish EU Council presidency, under whose leadership the Trilogue is being held, has been surprisingly open to copyright law issues in AI. While it remains unlikely that further copyright provisions will be added to the AI Act, this openness could lead to a more precise specification of the transparency obligation. Currently, the technical feasibility of providing such a summary of training data is far from clear.
  • No High-Risk Qualification: The Parliament proposes not to classify foundation models, including generative ones, as high-risk AI systems. This was a highly controversial issue, as earlier drafts had considered text-generating AI applications in general to be high-risk.
  • Safeguards Against Unlawful Output: The Parliament Draft requires providers to train, design, and develop the foundation models in a way that ensures "adequate safeguards" against AI output in breach of EU law, Art. 28b(4)(b). The draft does not provide any substantive clarification on the necessary scope of this obligation but refers to the "generally acknowledged state of the art" and prohibits any prejudice to fundamental rights. It remains to be seen whether, if this obligation is included, the final text of the AI Act will provide further clarification on the subject.
  • Transparency for Interacting Parties: Providers are subject to the general transparency obligation to inform natural persons that they are interacting with AI, unless this is obvious from the circumstances and the context of use, Art. 28b(4)(a). Given that generative AI applications can produce deceptively realistic images and communicate in ways that impersonate real people, some stakeholders question whether these general transparency obligations can sufficiently mitigate the risks of "fake news" and user misdirection.
  • Continuous Monitoring: Recognizing the rapidly evolving developments in the field of AI, the Parliament Draft proposes in Recital 60h that the AI Office (to be established in Brussels) monitor and periodically assess the legislative and governance framework of foundation models and in particular of generative AI systems. This demonstrates that the Parliament Draft acknowledges the possible need for adjustments regarding generative AI in the near future.

III. Outlook

As the world's first comprehensive AI regulation, the EU AI Act endeavors to break new ground in striking a balance between mitigating the risks of AI to fundamental rights and principles and enabling AI to reach its full beneficial potential, thereby strengthening the EU's global competitiveness. According to industry organizations, important fine-tuning of the AI Act's provisions will be necessary during the current Trilogue negotiations to reach these goals. Stakeholders have voiced concerns about potential duplications of legal requirements and double regulation that could lead to confusion and differing enforcement actions. The many copyright-related questions surrounding generative AI remain unanswered by EU lawmakers for the time being. However, the world's first transparency obligation for copyrighted AI training data, if included in the final AI Act, will certainly test the waters for technical and economic feasibility. Stay tuned for the finalization of the AI Act and the upcoming AI legislative measures in the coming months!

Susan Bischoff, a research assistant in the Technology Transactions Group in our Berlin office, helped with the preparation of this client alert.

Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Morrison & Foerster LLP. All rights reserved