Artificial intelligence thrives on cumulative learning and optimisation and, unlike traditional software, the competitive edge of AI systems often lies not in the source code itself, but in how the systems themselves are trained. For this reason, AI developers increasingly rely on trade secrets in addition to, or instead of, patents to protect key assets. While AI-related patents, especially at EU level, may face significant challenges (e.g. excluded subject matter or lack of technical effect), trade secret protection can be immediate, flexible and last indefinitely if secrecy is duly preserved.
In practice, some of the most valuable assets in AI ecosystems are ideally suited for trade secret protection, including:
- Proprietary data pipelines and preprocessing techniques
- Reinforcement learning strategies and training protocols
- Model weights, architectures, and internal algorithms
- Prompts and instructions
- Safety guardrails and evaluation frameworks
Yet, these are the same elements that, under the EU AI Act's1 sweeping transparency framework, may need to be documented and partially disclosed. This creates a new legal paradox: how to comply with unprecedented disclosure obligations without eroding the confidentiality that underpins competitive advantage and constitutes the hidden value of trade secrets.
Trade secrets have long been the “silent engine” of innovation - requiring no registration or publication and retaining their value for as long as secrecy is maintained. As the EU's transparency regime challenges this model, AI system providers, implementers and legal professionals must now navigate a delicate balance: ensuring compliance with disclosure obligations while safeguarding the hidden value embedded in confidential know-how. This is not just a regulatory issue - it's a strategic imperative for IP protection in the AI era.
The EU transparency landscape
The EU AI Act, which entered into force in 2024, is being phased in progressively through 2025 and 2026. One of its most impactful pillars is transparency. The Act introduces a tiered framework requiring companies to disclose information based on the type of AI system and the associated risk level. This framework operates across three distinct layers:
- User-facing transparency (Article 50): AI systems designed to interact directly with natural persons, such as chatbots, synthetic media, and deepfake generators, must clearly inform users when they are engaging with AI or encountering AI-generated content. These obligations are primarily aimed at consumer awareness and are unlikely to interfere with trade secret protection.
- High-risk system documentation (Article 13): Providers of high-risk AI systems must deliver “clear, complete and correct” instructions, including the system's intended purpose, accuracy metrics, and details of the training, validation and testing data used. While intended to ensure safety and accountability, these disclosures may encroach upon proprietary methods and datasets that developers would otherwise protect as trade secrets.
- General-purpose AI (GPAI) obligations (Article 53): All GPAI model providers must:
- prepare and maintain comprehensive model documentation;
- share detailed information with the EU AI Office or national authorities on request;
- give downstream developers sufficient technical details to allow safe integration; and
- publish summaries of the training content used.
For models deemed to pose systemic risk, further details are required, including evaluation protocols and risk-mitigation testing.
It is within these latter two categories - high-risk systems and GPAI models - that the tension between transparency and trade secret protection becomes most acute. The requirement to disclose detailed instructions, training methodologies, and even summaries of training data risks exposing the very elements that constitute a company's competitive edge.
Practical relevance: Transparency meets confidentiality
Recent cases illustrate this tension, with trade secret defences raised against calls for increased transparency. In the United States, for example, The New York Times v. OpenAI and Microsoft concerns allegations that OpenAI's models were trained on the Times' copyrighted content. The plaintiffs seek disclosure of the underlying training data and model documentation to substantiate their claims, while OpenAI argues that such information constitutes core trade secrets central to its competitive advantage. It remains to be seen how the US courts will balance the competing interests of transparency and confidentiality in that context.
A case where such a defence did not (fully) prevail is CK v. Magistrat der Stadt Wien (C-203/22). There, the Court of Justice of the European Union (CJEU) held – in relation to data privacy regulations – that an operator relying on automated decision-making must provide “concise, transparent, intelligible and easily accessible” information about the criteria used, even where those criteria are protected as trade secrets. The judgment confirms that trade-secret protection cannot automatically override transparency obligations where fundamental rights or accountability concerns are at stake.
Together, these developments illustrate how AI providers may increasingly find themselves caught between transparency expectations and the need to preserve proprietary knowledge, a tension that the EU AI Act's disclosure duties under Articles 13 and 53 are likely to intensify.
Operationalising transparency: The Commission's implementation tools
Two key instruments adopted in mid-2025 have transformed the Act's transparency duties from aspirational policy into day-to-day compliance requirements:
(a) The GPAI Code of Practice
Finalised in July 2025 after extensive industry consultation, the GPAI Code of Practice provides practical guidance for general-purpose AI model providers on meeting their transparency, safety and documentation obligations under the EU AI Act. It introduces a Model Documentation Form – a semi-standardised template that providers can complete to demonstrate compliance. Although voluntary, adherence will be treated by the Commission and national authorities as strong evidence of good practice, consistent with the AI Act's risk-based approach. The Form requires providers to describe (a sketch of how these areas might be handled in practice follows the list):
- architecture design and training methods;
- data sources, curation logic and filtering tools;
- human-feedback loops and fine-tuning datasets; and
- safety-testing methodologies.
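By way of illustration only, the sketch below shows one way a provider might internally prepare these four documentation areas before completing the Form, separating what is described from what is withheld as confidential. The keys and example entries are assumptions chosen for illustration, not wording drawn from the Code of Practice or the Model Documentation Form itself.

```python
# Purely illustrative internal checklist mirroring the four documentation areas
# listed above. Keys and entries are assumptions, not language from the GPAI
# Code of Practice or the Model Documentation Form.
model_documentation_plan = {
    "architecture_and_training_methods": {
        "describe": "model family, overall architecture class, main training stages",
        "withhold": "layer-level design choices, hyperparameter schedules",
    },
    "data_sources_curation_and_filtering": {
        "describe": "categories of sources, summary of filtering criteria",
        "withhold": "proprietary curation logic, in-house filtering tools",
    },
    "human_feedback_and_fine_tuning": {
        "describe": "existence and purpose of feedback loops",
        "withhold": "fine-tuning datasets, reward-model internals",
    },
    "safety_testing": {
        "describe": "evaluation categories, headline results",
        "withhold": "internal red-teaming protocols",
    },
}

# The "describe" entries feed the regulator-facing documentation; each
# "withhold" entry should be backed by a recorded justification so that
# redactions can be shown to be reasonably justified if questioned.
for area, plan in model_documentation_plan.items():
    print(f"{area}: disclose -> {plan['describe']}")
```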
The Code explicitly reaffirms that trade secrets must be protected and that all submitted information is subject to confidentiality obligations. However, by leaving discretion to regulators – such as allowing them to judge whether specific redactions are 'reasonably justified' – it introduces interpretive flexibility that has created uncertainty for many AI providers.
(b) Commission guidelines on GPAI transparency
Published in September 2025, the Guidelines clarify how transparency and documentation obligations under the AI Act are expected to be applied in practice. Key takeaways include:
- regulators may request underlying data samples or evidence of provenance audits;
- confidentiality claims must be justified and limited; companies cannot rely on blanket assertions of trade secret protection to withhold information; and
- authorities are expected to handle disclosed material under professional secrecy rules.
The Guidelines also bridge the AI Act with copyright-law transparency duties, requiring GPAI providers to document how they ensured that training content does not infringe EU copyright. Companies must disclose sufficient information to enable oversight, risk assessment and downstream integration, while limiting confidentiality claims to what is strictly necessary to protect legitimate commercial interests. Put simply: transparency is now the rule, confidentiality the exception.
This creates a new legal and strategic tension: how to comply with transparency obligations without enabling reverse engineering, information leakage or unintended technology transfer to competitors and partners. Without a structured approach, disclosures made for compliance may risk undermining future enforcement of trade secret rights.
Navigating the tension: From reactive disclosure to structured transparency
To navigate this landscape, companies should move from reactive disclosure to structured transparency management, by:
- Mapping sensitive information early – classifying AI assets (data, weights, code, fine-tuned layers, pipelines) and flagging those with trade secret relevance (a minimal sketch of such an inventory follows this list)
- Preparing “defensible transparency” – disclosing what is necessary, but framing technical explanations at an appropriate level of abstraction that avoids reconstruction risk
- Embedding contractual protection – ensuring downstream sharing is governed by AI-specific restrictions (e.g. model reconstruction bans, usage limits, monitoring rights)
- Maintaining audit evidence – keeping a confidentiality rationale file that documents why withheld information qualifies as a trade secret and demonstrates continued compliance
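As a minimal sketch only, the classification step in the first point above could be supported by a simple internal inventory along the lines below. The sensitivity tiers, field names and validation rule are assumptions chosen for illustration, not requirements of the AI Act or the Code of Practice.

```python
from dataclasses import dataclass, field
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = "public"              # may be disclosed in full
    ABSTRACTED = "abstracted"      # disclose only at a high level of abstraction
    TRADE_SECRET = "trade_secret"  # withhold and record the justification

@dataclass
class AIAsset:
    name: str          # e.g. "training data curation pipeline"
    category: str      # data, weights, code, fine-tuned layers, pipelines
    sensitivity: Sensitivity
    rationale: str = ""        # why the asset qualifies as a trade secret
    disclosure_note: str = ""  # what, if anything, is shared and with whom

@dataclass
class AssetInventory:
    assets: list[AIAsset] = field(default_factory=list)

    def add(self, asset: AIAsset) -> None:
        # Withheld items must carry a recorded rationale so the
        # confidentiality rationale file stays audit-ready.
        if asset.sensitivity is Sensitivity.TRADE_SECRET and not asset.rationale:
            raise ValueError(f"{asset.name}: record why this qualifies as a trade secret")
        self.assets.append(asset)

    def disclosure_report(self) -> list[dict]:
        # Only public or abstracted entries surface in regulator-facing documentation.
        return [
            {"name": a.name, "category": a.category, "note": a.disclosure_note}
            for a in self.assets
            if a.sensitivity is not Sensitivity.TRADE_SECRET
        ]

inventory = AssetInventory()
inventory.add(AIAsset(
    name="reinforcement learning training protocol",
    category="pipelines",
    sensitivity=Sensitivity.TRADE_SECRET,
    rationale="Derives economic value from secrecy; access restricted to the core ML team.",
))
```

The point of the validation step is simply that an item withheld without a documented rationale is flagged at the moment of classification, rather than at the moment a regulator asks.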
Conclusion
The EU AI Act marks a decisive shift from voluntary disclosure to enforced transparency. For AI developers and rights-holders, this means that trade secret protection can no longer rely on silence alone. The future of compliant AI innovation will depend on designing systems and governance processes that preserve confidentiality by design – treating transparency not as a threat, but as an element to be managed strategically within a broader IP and compliance framework, while closely monitoring the rollout of AI-related provisions.
Footnote
1. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (the “AI Act”).
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.