ARTICLE
26 September 2025

EU's AI Code Of Practice – A Step Closer To Navigating The AI Regulation's Copyright Obligations

Bech-Bruun

Contributor

Bech-Bruun is a leading full-service law firm in Denmark, serving a diverse clientele across Danish businesses, the public sector, and global corporations. With nearly 600 specialized employees, the firm provides expertise in all aspects of commercial law. Bech-Bruun prides itself on a high standard of service, deep specialization, and a collaborative approach, making it a trusted advisor to its clients.

The firm’s core values—quality, specialization, business insight, and teamwork—are fundamental to its operations. By understanding client goals and combining attention to detail with strategic foresight, Bech-Bruun effectively supports Danish clients on both domestic and international matters, while also advising foreign companies entering the Danish market. Bech-Bruun positions itself not just as a legal provider, but as a partner committed to guiding clients through complex legal landscapes in a dynamic global market.


The European Commission has published an AI Code of Practice providing guidance on EU copyright law compliance in relation to the AI Regulation. The Code of Practice is important for, among others, AI providers and enterprises that hold or exercise rights in data used to train AI models.

On 10 July 2025, the European Commission published a Code of Practice for general-purpose artificial intelligence (GPAI), i.e. models such as the large language models behind ChatGPT, Gemini, Claude and Llama.

Developed by independent experts with input from a broad range of stakeholders, the Code of Practice was adopted by the European Commission and the AI Committee on 1 August 2025. It is designed to help the industry comply with the GPAI rules of the AI Regulation, which became applicable on 2 August 2025.

Nature of the rules

The Code of Practice is a voluntary tool for enterprises in the industry and consists of three main chapters: transparency, copyright, and safety and security. Providers who sign the Code of Practice gain an administrative advantage over providers who must otherwise demonstrate their compliance with the AI Regulation by other means.

Adherence to the Code of Practice thus reduces the administrative burden on providers and creates a presumption of compliance with the rules of the Regulation. The Code of Practice is therefore "soft law": formally non-binding rules that will nevertheless apply in practice.

Copyright rules

The copyright chapter addresses topics of interest to both AI developers and rights holders. In particular, it addresses the requirements of Article 53 of the AI Regulation on the obligations of providers of GPAI models.

Copyright compliance policy

Article 53(1)(c) of the AI Regulation requires GPAI model providers to adopt a policy to comply with EU copyright law, in particular to identify and respect rights holders' so-called opt-outs (rights reservations). The AI Code of Practice emphasises that machine-readable opt-outs must be respected, including, for example, opt-outs expressed through the robots.txt protocol.
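
The Code of Practice does not prescribe any particular implementation, but a crawler that respects robots.txt-based opt-outs could, as a minimal sketch, check permissions along the following lines using Python's standard urllib.robotparser. The user-agent token and URL below are hypothetical.

```python
from urllib.parse import urlparse
from urllib.robotparser import RobotFileParser

# Hypothetical user-agent token; real GPAI crawlers publish their own identifiers.
USER_AGENT = "ExampleGPAIBot"

def may_crawl(url: str) -> bool:
    """Return True only if the site's robots.txt does not opt out of crawling by this agent."""
    parts = urlparse(url)
    robots_url = f"{parts.scheme}://{parts.netloc}/robots.txt"
    parser = RobotFileParser()
    parser.set_url(robots_url)
    try:
        parser.read()  # fetch and parse the site's robots.txt
    except OSError:
        return False  # if the opt-out signal cannot be read, err on the side of not crawling
    return parser.can_fetch(USER_AGENT, url)

# Usage: skip any page whose robots.txt reserves rights against this crawler.
if not may_crawl("https://example.com/articles/some-page"):
    print("Opt-out detected - page excluded from the training corpus")
```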

The requirement for such an internal policy on the acquisition and use of training data also follows from Recital 106 of the AI Regulation, and Recital 105 emphasises that any use of copyrighted content requires authorisation unless a relevant copyright exception applies, such as the exception for text and data mining under certain conditions.

The AI Code of Practice tightens the copyright policy requirement by requiring signatories to designate who in their organisation is responsible for compliance with the Code of Practice and to establish a complaints mechanism for rights holders.

In addition, providers of GPAI models are required to implement technical safeguards to ensure that the model does not reproduce protected training content in its output in a way that constitutes copyright infringement. This raises the bar from pure risk management to active measures against potential copyright infringement.
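
The Code of Practice leaves the choice of safeguards to the provider. Purely as an illustrative sketch, one conceivable building block is an output filter that flags near-verbatim overlap between a model's output and known protected text; the function names, reference corpus and threshold below are hypothetical and not drawn from the Code of Practice itself.

```python
def word_ngrams(text: str, n: int = 8) -> set[tuple[str, ...]]:
    """Split a text into overlapping word n-grams."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def looks_like_verbatim_copy(output: str, protected_texts: list[str],
                             n: int = 8, threshold: float = 0.2) -> bool:
    """Flag output whose n-gram overlap with any protected reference text exceeds the threshold."""
    out_grams = word_ngrams(output, n)
    if not out_grams:
        return False
    return any(
        len(out_grams & word_ngrams(reference, n)) / len(out_grams) >= threshold
        for reference in protected_texts
    )

# Hypothetical usage: block or regenerate a response that reproduces protected text.
protected_texts = ["..."]  # excerpts the provider has indexed for filtering purposes
if looks_like_verbatim_copy("model output goes here", protected_texts):
    print("Potentially infringing reproduction detected - response withheld")
```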

For open-source models, there is the additional minor but meaningful requirement that users must be warned that using the model may still constitute copyright infringement even though the model is freely available.

Finally, it is stressed that technical protection measures must not be circumvented, which already follows from EU copyright law. As a new feature, signatories are required to exclude websites known to infringe copyright on a commercial scale when crawling the internet for training content.
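
The Code of Practice does not specify how such sources are to be identified or excluded. A minimal sketch of the kind of exclusion check a crawler might apply, using a purely hypothetical blocklist, could look like this:

```python
from urllib.parse import urlparse

# Purely hypothetical blocklist; in practice providers would maintain or source
# curated lists of sites found to infringe copyright on a commercial scale.
INFRINGING_DOMAINS = {"example-piracy-site.invalid", "another-known-infringer.invalid"}

def is_excluded_source(url: str) -> bool:
    """Return True if the URL belongs to a domain on the infringement blocklist."""
    return urlparse(url).netloc.lower() in INFRINGING_DOMAINS

# Usage: drop blocklisted URLs before they enter the crawl queue.
urls = ["https://example.com/news", "https://example-piracy-site.invalid/downloads"]
crawlable = [u for u in urls if not is_excluded_source(u)]
```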

Summary of content used for training

As for the requirement laid down in Article 53(1)(d) of the AI Regulation that providers must prepare and publish a sufficiently detailed summary of the content used to train the model, the AI Committee emphasises that such a summary must be prepared in accordance with a template. The template is to be provided by the European AI Office in accordance with Recital 107 of the AI Regulation and will thus form part of the overall AI regulatory framework. The European AI Office is a new body set up under the European Commission in 2024 to ensure the correct and uniform application of the AI Regulation across the EU.

Taking due account of the need to protect trade secrets and confidential business information, the summary should generally be broad in scope rather than technically detailed. This may be done, for example, by listing the most important data collections or data sets used to train the model and by providing a narrative description of other data sources. The information must enable parties with legitimate interests, such as rights holders, to exercise their rights under EU law.

However, unlike previous drafts, the final Code of Practice contains no requirement for signatories to verify the lawfulness of material obtained from third-party databases.

Significance for enterprises

The AI Code of Practice is the most specific guidance to date on the complex rules of the AI Regulation, and several enterprises have already signed it, including Google (Gemini), Microsoft and OpenAI (ChatGPT). Meta, the company behind Facebook and Instagram, has announced that it will not sign.

Furthermore, on 18 July 2025, the European Commission published guidelines for GPAI models that further clarify the timeline of the AI Regulation and the commencement of the rules on 2 August 2025.

Firstly, enterprises need to clarify whether they are considered providers of GPAI models and thus directly subject to the rules, and whether they are willing to sign up to the Code of Practice. Enterprises that either develop such a model in-house or outsource its development and market it as a tool for their customers will be considered providers of AI models. Mere users of third-party models, however, fall outside the definition.

Enterprises that are not providers but use systems based on large language models should note that they will gain access to more information about how the underlying models are built and trained, and should assess whether that information gives rise to any action on their part.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
