ARTICLE
21 July 2025

Another EU AI Act Milestone – Nearly Final Version Of The General-Purpose AI Code Of Practice Published

WilmerHale

Contributor

WilmerHale provides legal representation across a comprehensive range of practice areas critical to the success of its clients. With a staunch commitment to public service, the firm is a leader in pro bono representation. WilmerHale is 1,000 lawyers strong with 12 offices in the United States, Europe and Asia.

On July 10, 2025, the European Commission released a nearly final version of the General-Purpose AI (GPAI) Code of Practice (Code) under Regulation (EU) 2024/1689 (AI Act).

This blog post outlines the purpose, applicability, and content of the Code. Designed as a voluntary compliance mechanism, the Code plays a crucial role in the interim regulatory landscape leading up to August 2, 2025, when the AI Act's GPAI provider obligations begin to apply. The Code offers practical guidance for AI providers on meeting specific obligations under the AI Act and is especially relevant for demonstrating early adherence to Articles 53 and 55, which set out requirements for transparency, copyright compliance, and systemic risk mitigation in the development and deployment of GPAI models.

Background

  • GPAI Provider Obligations. As explained in our previous blog post, the AI Act introduces specific obligations for GPAI models, which can perform a wide range of tasks and be integrated into a variety of downstream systems. Providers of GPAI models must maintain detailed technical documentation, publish summaries of training data, comply with EU copyright law, and share information with regulators and downstream users. Noncompliance can result in fines of up to €15 million or 3% of total worldwide annual turnover, whichever is higher.
  • Models with Systemic Risk. Providers offering GPAI models with systemic risk face even stricter requirements, including model evaluations, risk mitigation, incident reporting, and cybersecurity measures. Already, a significant number of providers globally have developed models that surpass the compute threshold (cumulative training compute above 10^25 floating-point operations, per Article 51(2)) above which a GPAI model is presumed to have high-impact capabilities and therefore to present systemic risk; an illustrative back-of-envelope check follows this list. The European Commission may also decide that a GPAI model has high-impact capabilities, taking into account various technical criteria that it may amend.
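
For orientation, the Python sketch below illustrates that back-of-envelope check using the common "6 × parameters × training tokens" heuristic for estimating dense transformer training compute. The heuristic and the example figures are assumptions for illustration only; the AI Act sets the 10^25 FLOP threshold but prescribes no estimation method.

    # Back-of-envelope check against the AI Act's systemic-risk presumption.
    # Assumption: the common 6 * N * D heuristic for dense transformer training
    # FLOPs (6 x parameters x training tokens). The Act sets only the threshold
    # (10^25 FLOPs, Article 51(2)), not a method for estimating compute.

    SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

    def estimated_training_flops(n_parameters: float, n_tokens: float) -> float:
        """Rough training-compute estimate for a dense transformer."""
        return 6.0 * n_parameters * n_tokens

    # Hypothetical example: a 70B-parameter model trained on 15T tokens.
    flops = estimated_training_flops(70e9, 15e12)  # ~6.3e24 FLOPs
    print(f"Estimated compute: {flops:.2e} FLOPs; "
          f"presumed systemic risk: {flops > SYSTEMIC_RISK_THRESHOLD_FLOPS}")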

Objectives and Legal Function

  • Purpose. The Code of Practice serves as a transitional compliance tool. It is referenced under Article 56 of the AI Act and is intended to:
    • Help providers of GPAI models meet transparency, copyright, and safety obligations under the AI Act;
    • Offer a structured way to present and maintain documentation, policies, and technical safeguards; and
    • Allow competent authorities, including the AI Office, to assess whether providers are meeting their legal duties under the AI Act.
  • Applicability. The Code is structured around two tiers of applicability.
    • All GPAI model providers are expected to comply with the Transparency and Copyright chapters of the Code.
    • Only providers of GPAI models with systemic risk are subject to the Safety and Security chapter of the Code.

Chapter 1 of the Code: Transparency

The AI Act requires GPAI providers to maintain robust documentation and share information with both regulators and downstream users (Articles 53(1)(a) and (b)). The Transparency chapter of the Code addresses these duties through three primary measures.

  • Documentation Requirements. Providers can create and update their model documentation using the Code's standardized Model Documentation Form, capturing details such as model architecture, training methods, distribution channels, and capabilities (a hypothetical record structure is sketched after this list).
  • Information Sharing. Providers can make relevant documentation accessible to downstream users and provide further information upon request from the AI Office or competent national authorities.
  • Integrity Controls. Providers are expected to implement quality assurance and security controls to preserve the accuracy and integrity of documentation over time.
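
The official Model Documentation Form published alongside the Code defines the authoritative structure; the Python snippet below is only a hypothetical internal record showing the kinds of fields the Transparency chapter contemplates. All field names and values are illustrative, not the official template.

    from dataclasses import asdict, dataclass, field
    from datetime import date
    from typing import List

    @dataclass
    class ModelDocumentationRecord:
        # Illustrative fields only; the Commission's Model Documentation Form
        # defines the authoritative structure.
        model_name: str
        provider: str
        architecture: str                 # e.g., dense decoder-only transformer
        training_methods: List[str]       # e.g., pretraining, fine-tuning
        distribution_channels: List[str]  # e.g., hosted API, open weights
        capabilities_and_limitations: str
        last_updated: date = field(default_factory=date.today)

    # Hypothetical entry, to be kept current as the model or its
    # distribution changes.
    record = ModelDocumentationRecord(
        model_name="example-model-v1",
        provider="Example AI GmbH",
        architecture="decoder-only transformer",
        training_methods=["pretraining", "supervised fine-tuning"],
        distribution_channels=["hosted API"],
        capabilities_and_limitations="General-purpose text generation; "
                                     "not evaluated for high-risk uses.",
    )
    print(asdict(record))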

Chapter 2 of the Code: Copyright

The AI Act requires GPAI providers to establish policies that ensure compliance with EU copyright and related rights law (Article 53(1)(c)). The Copyright chapter provides concrete guidance to help providers develop and implement policies that govern the lawful use of training data, identify and comply with rights reservations when crawling the web, and mitigate the risk of infringing outputs.

  • Policy Adoption. Providers should implement a copyright policy that includes internal responsibilities and procedural safeguards.
  • Lawful Data Use. When collecting training data through web crawling, providers should ensure they do not bypass technological access restrictions and should avoid sites known for persistent copyright violations.
  • Recognition of Rights Reservations. Providers should honor machine-readable rights reservations expressed by rightsholders and support evolving industry standards for signaling such reservations (a minimal robots.txt check is sketched after this list).
  • Mitigation of Infringing Outputs. Providers are expected to deploy technical safeguards and policy restrictions to reduce the likelihood of infringing content being generated by their models.
  • Complaint Mechanisms. A point of contact and process should be in place to receive and respond to concerns from rightsholders.
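
One widely used machine-readable reservation signal is the robots.txt protocol. As a minimal illustration, the Python sketch below checks a site's robots.txt before fetching a page for training data, using the standard library's urllib.robotparser. The crawler name and URLs are hypothetical, and robots.txt is only one of several signals a provider may need to honor.

    from urllib.robotparser import RobotFileParser

    def may_crawl(page_url: str, user_agent: str, robots_url: str) -> bool:
        """Return True if robots.txt permits this user agent to fetch the page."""
        parser = RobotFileParser()
        parser.set_url(robots_url)
        parser.read()  # fetches and parses the site's robots.txt
        return parser.can_fetch(user_agent, page_url)

    # Hypothetical crawler identity and target site.
    if may_crawl("https://example.com/articles/1",
                 user_agent="ExampleGPAICrawler/1.0",
                 robots_url="https://example.com/robots.txt"):
        print("robots.txt permits fetching; proceed (subject to other checks).")
    else:
        print("Rights reservation detected; skip this URL.")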

Chapter 3 of the Code: Safety and Security

For GPAI models that meet the threshold for systemic risk, the Safety and Security chapter provides a comprehensive framework aligned with Article 55 of the AI Act.

  • Framework Development. Providers should establish, implement, and update a safety and security framework detailing how they assess and mitigate systemic risks throughout the model life cycle.
  • Risk Assessment and Mitigation. Providers should conduct structured evaluations, define risk acceptance criteria, and implement technical and organizational measures to manage risks. This includes both pre-release and post-market monitoring.
  • Incident Reporting and External Oversight. Providers are expected to document serious incidents and engage with independent evaluators. Reports should be submitted to the AI Office.
  • Cybersecurity Safeguards. Appropriate technical protections should be in place to prevent unauthorized access, model exfiltration, or tampering, including for insider threats.

Next Steps

  • Is This Enough? In the coming weeks, Member States and the European Commission will examine whether the Code is sufficient to support compliance with the GPAI-related obligations under the AI Act.
  • Early Signatories. A major US company and a major EU company that are building popular GPAI models have already agreed to sign the Code.
  • Remaining Questions. Supplementary materials such as guidelines clarifying key GPAI terms and concepts and the training data disclosure template are still pending, creating uncertainty around full compliance expectations. The European Commission is considering a grace period for the Code's signatories, but its duration and scope are unclear. Also undecided is whether providers may selectively commit to parts of the Code. How these gaps are addressed will be critical to industry uptake and compliance under the AI Act.

The authors would like to thank Jess Miller for her assistance in preparing this blog post.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
