ARTICLE
19 November 2025

Collision Of Continental And Common Law Approaches To Generative AI Training


Generative AI Training

Two landmark court decisions issued within a week of each other in November 2025, namely GEMA v. OpenAI and Getty Images v. Stability AI, have created a striking legal divergence that fundamentally reshapes the international landscape for artificial intelligence copyright liability. Whereas the Munich Regional Court established sweeping protections for copyright holders by treating model "memorisation" as actionable reproduction, the UK High Court left British jurisprudence conspicuously silent on the merits of AI training liability. A third judicial approach emerged from Hamburg's September 2024 judgment in Robert Kneschke v. LAION e.V., which provisionally upheld text-and-data-mining ("TDM") privileges for research purposes.

These contrasting rulings expose irreconcilable visions of copyright protection in the AI age and create acute practical risks for multinational AI developers navigating incompatible legal regimes.

GEMA v. OpenAI

On 11 November 2025, the Munich Regional Court (Landgericht München I) (the "Court") handed down what it called one of the first landmark AI copyright rulings in Europe, determining that OpenAI had violated German copyright law by training ChatGPT (specifically GPT-4 and GPT-4o) on nine protected German songs without licensing. According to the press release, the Court's reasoning was remarkably sweeping. Rather than focusing solely on whether the chatbot reproduced lyrics in its outputs, the judges held that the mere incorporation of protected works into the model's parameters, which the Court termed "memorisation", itself constituted copyright infringement, independent of any subsequent reproduction.

The Court rejected OpenAI's technical argument that large language models do not store or copy specific training data but merely reflect statistical correlations extracted from entire datasets, an argument closely echoing the position Stability advanced in its dispute with Getty. This time, however, the Court took the opposite view: where a model can output recognisable passages from protected works in response to simple prompts, the causal link between training data and output establishes that memorisation has indeed occurred, and such memorisation in turn constitutes reproduction under both German legislation and the InfoSoc Directive.

Rejection of the TDM Exception

Critically, the Court rejected OpenAI's reliance on the TDM exception transposed into German law from the DSM Directive. OpenAI had argued that the TDM exceptions permitted its training activities because they cover preparatory copying for analytical purposes. The Court found this interpretation untenable: once a language model reproduces protected works with sufficient fidelity to be recognisable, the copying exceeds the statutory boundaries of permissible TDM preparation, and exploitation of copyright interests has plainly occurred.

The Court thus held unequivocally that the TDM exception, a statutory safe harbour that many had assumed would cover AI applications, provides no refuge from copyright liability where generative models can reproduce their training material.

Liability on the Developer

A notable contrast with the Getty judgment is the Munich Court's imposition of direct and unqualified liability on AI developers. Rejecting arguments that developers ("providers" in the terminology of the EU AI Act) merely furnish neutral technical tools comparable to search engines or hosting platforms, the Court held that training, distributing, and maintaining an AI model constitutes "active behaviour" triggering copyright liability. It is therefore the developer, and not necessarily the end-user, that bears responsibility for infringing outputs.

Practical Implications

The cumulative effect of these rulings is legal bifurcation. Germany now signals that licensing agreements or rights-holder opt-out mechanisms may become operational prerequisites to lawful model development, whilst the UK remains procedurally opaque, offering developers neither a safe harbour nor substantive guidance. Meanwhile, pending references before the CJEU threaten to impose a fourth interpretive framework across Member States.

For AI developers intending to develop or place AI systems on the market within the EEA, the Munich decision eliminates the assumption that technical opacity or distributed parameter storage can justify unlicensed use of copyrighted material. Although the decision may be appealed, developers can no longer safely maintain that training constitutes mere processing rather than infringing reproduction.

This legal fracture underscores the urgent need for harmonisation. Given the differing judicial views, the AI industry will otherwise operate in a state of jurisdictional fragmentation that undermines both legal certainty and innovation.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
