2 October 2025

Transparency Requirements Re Training Data And Compliance With Copyright Law Come Into Force In EU

Herbert Smith Freehills Kramer LLP


Following the initial set of obligations that became applicable on 2 February 2025, a second wave of measures under the EU AI Act entered into force on 2 August 2025. These include:

  • Obligations for general-purpose AI (GPAI) models – As far as IP is concerned, providers of GPAI models are now required to comply with transparency and documentation requirements, including publishing summaries of training data, implementing risk mitigation measures, and ensuring compliance with EU copyright law.
  • The designation of notifying authorities and notified bodies
  • The framework for penalties and enforcement

Final version of the EU AI Act Code of Practice

The final version of the EU AI Act Code of Practice for General Purpose AI Models was released on 10 July 2025 and is structured around three key chapters: 1) Transparency, 2) Copyright and 3) Safety and Security. Although voluntary and not legally binding, the Code (applicable from 2 August 2025) is expected to become a benchmark for copyright compliance in AI.

The Copyright chapter is particularly significant for GPAI model providers as it sets out five measures to be implemented by the signatories, which include major tech players such as Google, Microsoft and OpenAI:

  • Draw up, implement and keep up to date a policy to comply with EU law on copyright and related rights for all GPAI models placed on the Union market;
  • Reproduce and extract only lawfully accessible copyright-protected content when crawling the World Wide Web;
  • Identify and comply with rights reservations when crawling the World Wide Web (see the illustrative sketch after this list);
  • Mitigate risks of generating copyright-infringing outputs through proportionate technical safeguards, and prohibit infringing uses in terms and policies;
  • Designate a point of contact and enable the lodging of complaints.
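
The second and third measures are usually discussed by reference to machine-readable opt-outs such as the Robot Exclusion Protocol (robots.txt). As a purely illustrative, non-authoritative sketch, the Python snippet below shows one way a training-data crawler could check a site's robots.txt before fetching a page. The crawler name and URLs are hypothetical, and robots.txt is only one possible form of rights reservation; the Code itself does not prescribe any particular implementation.

    # Illustrative sketch: honouring a robots.txt-style rights reservation before crawling.
    # Assumes robots.txt is the machine-readable reservation mechanism in use.
    from urllib.robotparser import RobotFileParser

    CRAWLER_USER_AGENT = "ExampleGPAITrainingBot"  # hypothetical crawler name

    def may_fetch_for_training(page_url: str, robots_url: str) -> bool:
        """Return True only if the site's robots.txt permits this crawler to fetch the page."""
        parser = RobotFileParser()
        parser.set_url(robots_url)
        parser.read()  # downloads and parses robots.txt
        return parser.can_fetch(CRAWLER_USER_AGENT, page_url)

    if __name__ == "__main__":
        url = "https://example.com/articles/some-page"
        allowed = may_fetch_for_training(url, "https://example.com/robots.txt")
        print("Crawl permitted:", allowed)

In practice a provider would also need to respect other reservation signals (for example page-level metadata or dedicated opt-out protocols), but the basic pattern of checking the reservation before reproducing or extracting content is the same.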

Guidelines for general-purpose AI models

On 18 July 2025, the European Commission published its Guidelines on the scope of the obligations for general-purpose AI (GPAI) models under the EU AI Act. While also non-binding, these Guidelines offer valuable insight into the likely approach that national regulators will adopt when interpreting and enforcing the provisions of the AI Act.

The Guidelines address the following key areas:

  1. What constitutes a GPAI model. In particular, the indicative criterion for a model to be considered a general-purpose AI model is that its training compute is greater than 10²³ FLOP (floating point operations) and it can generate language (whether in the form of text or audio), text-to-image or text-to-video.
  2. When a GPAI model is considered to pose a systemic risk, specifying that the cumulative amount of computation used for the training of a general-purpose AI model measured in FLOP is considered to be a relevant metric for identifying high-impact capabilities.
  3. When modifications by downstream actors qualify them as GPAI model providers. In particular, a downstream actor becomes a new GPAI provider if the training compute used for the modification exceeds one-third of that used to train the original model (see the worked example after this list).
  4. Nuances concerning the exemption applicable to open-source models. On this point, the Guidelines clarify that certain transparency obligations relating to the training process will not apply if all of the following conditions are met:
  • The model is released under a free and open-source licence;
  • The model is not monetized;
  • Key technical information is made publicly available (including parameters, architecture, and usage information).
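
To make the compute criteria in points 1 and 3 concrete, the sketch below applies them to purely hypothetical figures. The 10²³ FLOP threshold and the one-third fraction are taken from the Guidelines; the function names and the example compute numbers are illustrative only and do not reflect any real model.

    # Illustrative sketch of the indicative compute criteria in the Guidelines.
    GPAI_THRESHOLD_FLOP = 1e23          # indicative criterion for GPAI status
    MODIFICATION_FRACTION = 1.0 / 3.0   # share of the original training compute

    def is_indicatively_gpai(training_flop: float) -> bool:
        return training_flop > GPAI_THRESHOLD_FLOP

    def modification_makes_new_provider(original_flop: float, modification_flop: float) -> bool:
        return modification_flop > MODIFICATION_FRACTION * original_flop

    # Hypothetical figures: a model trained with 5 x 10^24 FLOP, later fine-tuned
    # by a downstream actor using 2 x 10^24 FLOP.
    original = 5e24
    fine_tune = 2e24
    print(is_indicatively_gpai(original))                        # True: above 10^23 FLOP
    print(modification_makes_new_provider(original, fine_tune))  # True: 2e24 > 5e24 / 3

On these hypothetical numbers, the fine-tuning run exceeds one-third of the original training compute (roughly 1.67 x 10^24 FLOP), so the downstream actor would itself be treated as a GPAI model provider.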

Stay ahead on AI regulation

The EU AI legal framework is evolving quickly and obligations for general-purpose AI models are becoming more onerous. Providers should start reviewing their policies, technical safeguards, and licensing strategies now. Contact our team for tailored advice and support.

