Following the initial set of obligations that became applicable on 2 February 2025, a second wave of measures under the EU AI Act entered into force on 2 August 2025. These include:
- Obligations for General Purpose AI (GPAI) models – As far as IP is concerned, providers of GPAI models must now comply with transparency and documentation requirements, including publishing summaries of the content used for training, implementing risk-mitigation measures, and ensuring compliance with EU copyright law.
- The designation of notifying authorities and notified bodies
- The framework for penalties and enforcement
Final version of the EU AI Act Code of Practice
The final version of the EU AI Act Code of Practice for General Purpose AI Models was released on 10 July 2025 and is structured around three key chapters: 1) Transparency, 2) Copyright and 3) Safety and Security. Although voluntary and not legally binding, the Code (applicable from 2 August 2025) is expected to become a benchmark for copyright compliance in AI.
The Copyright chapter is particularly significant for GPAI model providers as it sets out five measures to be implemented by its signatories, which include major tech players such as Google, Microsoft and OpenAI:
- Draw up, implement and keep up to date a policy to comply with EU law on copyright and related rights for all GPAI models placed on the Union market;
- Reproduce and extract only lawfully accessible copyright-protected content when crawling the World Wide Web;
- Identify and comply with rights reservations when crawling the World Wide Web;
- Mitigate risks of generating copyright-infringing outputs through proportionate technical safeguards, and prohibit infringing uses in terms and policies;
- Designate a point of contact and enable the lodging of complaints.
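The rights-reservation measure is widely understood to cover machine-readable opt-out signals such as robots.txt directives. As a minimal illustrative sketch only (the Code does not prescribe any particular implementation, and full compliance may require honouring other reservation protocols), a training crawler might check a site's robots.txt before fetching content; the user-agent string `ExampleGPTBot` and the paths below are hypothetical:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt from a site reserving its article section
# against a (hypothetical) AI training crawler.
ROBOTS_TXT = """\
User-agent: ExampleGPTBot
Disallow: /articles/

User-agent: *
Allow: /
"""

def may_crawl(robots_txt: str, user_agent: str, url_path: str) -> bool:
    """Return True only if the robots.txt rights-reservation signals
    permit this user agent to fetch the given path."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch(user_agent, url_path)

# The reserved section must be skipped; unreserved pages stay crawlable.
print(may_crawl(ROBOTS_TXT, "ExampleGPTBot", "/articles/post-1"))  # False
print(may_crawl(ROBOTS_TXT, "ExampleGPTBot", "/about"))            # True
```

In practice a provider would combine such per-site checks with the Code's other measures, such as excluding sources known to host infringing content.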
Guidelines for general-purpose AI models
On 18 July 2025, the European Commission published its Guidelines on the scope of the obligations for general-purpose AI (GPAI) models under the EU AI Act. While also non-binding, these Guidelines offer valuable insight into the likely approach that national regulators will adopt when interpreting and enforcing the provisions of the AI Act.
The Guidelines address the following key areas:
- What constitutes a GPAI model. In particular, the indicative criterion is that the model's training compute exceeds 10^23 FLOP (floating-point operations) and that it can generate language (whether text or audio), text-to-image or text-to-video.
- When a GPAI model is considered to pose a systemic risk, specifying that the cumulative amount of compute used to train the model, measured in FLOP, is a relevant metric for identifying high-impact capabilities, with models trained using more than 10^25 FLOP presumed to pose systemic risk.
- When modifications by downstream actors qualify them as GPAI model providers. In particular, a downstream actor becomes a new GPAI provider if the training compute used for the modification exceeds one-third of that used to train the original model.
- Nuances concerning the exemption applicable to open-source models. Here, the Guidelines clarify that certain transparency obligations relating to the training process will not apply if all of the following conditions are met:
- The model is released under a free and open-source licence;
- The model is not monetized;
- Key technical information is made publicly available (including parameters, architecture, and usage information).
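The compute-based criteria above can be made concrete with some arithmetic. The sketch below is purely illustrative: the thresholds come from the Guidelines and the AI Act, but the `6 × parameters × tokens` estimate of training compute is a common engineering heuristic for transformer models, not a legal test, and the example model sizes are hypothetical:

```python
# Thresholds drawn from the Commission Guidelines and the AI Act;
# the classification logic itself is an illustrative simplification.
GPAI_THRESHOLD_FLOP = 1e23     # indicative GPAI criterion
SYSTEMIC_RISK_FLOP = 1e25      # presumption of systemic risk
MODIFICATION_FRACTION = 1 / 3  # downstream-modification rule

def training_compute_flop(n_params: float, n_tokens: float) -> float:
    """Rough estimate via the common ~6 * N * D heuristic for
    transformer training compute; an approximation, not a legal test."""
    return 6.0 * n_params * n_tokens

def classify(compute_flop: float) -> str:
    if compute_flop > SYSTEMIC_RISK_FLOP:
        return "GPAI model presumed to pose systemic risk"
    if compute_flop > GPAI_THRESHOLD_FLOP:
        return "indicatively a GPAI model"
    return "below the indicative GPAI threshold"

def becomes_new_provider(original_flop: float, modification_flop: float) -> bool:
    """A downstream actor is treated as a new GPAI provider when the
    modification compute exceeds one third of the original training compute."""
    return modification_flop > original_flop * MODIFICATION_FRACTION

# Hypothetical 8-billion-parameter model trained on 15 trillion tokens:
base = training_compute_flop(8e9, 15e12)   # ~7.2e23 FLOP
print(classify(base))                      # indicatively a GPAI model
print(becomes_new_provider(base, 1e23))    # False: below one third of base
```

The one-third rule means that even a substantial fine-tune of a large model will often leave the downstream actor outside the provider obligations, since frontier-scale pre-training dwarfs typical fine-tuning budgets.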
Stay ahead on AI regulation
The EU AI legal framework is evolving quickly and obligations for general-purpose AI models are becoming more onerous. Providers should start reviewing their policies, technical safeguards, and licensing strategies now. Contact our team for tailored advice and support.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.