ARTICLE
17 November 2025

What The Getty Images vs. Stability AI Decision Means For Tech Leaders

By Varuni Paranavitane
Finnegan, Henderson, Farabow, Garrett & Dunner, LLP

Finnegan, Henderson, Farabow, Garrett & Dunner, LLP is a law firm dedicated to advancing ideas, discoveries, and innovations that drive businesses around the world. From offices in the United States, Europe, and Asia, Finnegan works with leading innovators to protect, advocate for, and leverage their most important intellectual property (IP) assets.
Key takeaways:
  1. Secondary infringement - No Copy Inside the AI Model: The UK High Court concluded that generative AI models do not store copies of the original images they are trained on, which means such models are not "copies" in the legal sense, alleviating some copyright concerns for developers.

  2. The Copyright 'Input' Question Isn't Settled: The case did not provide a clear precedent on whether it is lawful to use copyrighted material for training AI models in the UK, as Getty dropped this part of the claim, leaving developers in a legal gray area.

  3. Trademark Concerns - Logos in the Outputs: The ruling identified trademark infringement where AI-generated images contained "Getty Images" watermarks, suggesting that brand owners could have grounds to challenge the use of their marks in AI-generated outputs.

The UK just saw its first major ruling on AI and copyright — and it's a big deal for anyone building or using generative AI tools. The UK High Court has weighed in on Getty Images' case against Stability AI, the company behind Stable Diffusion. Getty argued that Stable Diffusion itself was an "infringing copy" of Getty's images because the system was trained on them. The court, as it turned out, disagreed.

The decision gives some clarity on how UK courts might handle copyright and trademark issues around AI training and outputs. Here are the issues that CIOs and AI developers really need to worry about.

No 'Copy' Inside the AI Model

After digging into how the model was built, the judge concluded that once Stable Diffusion has been trained, it doesn't actually store the original images; it has simply learned patterns from the data. In plain English: the model doesn't keep copies of the photos, so it can't be a "copy" in the legal sense.

That finding knocked out Getty's secondary copyright infringement claim. For AI developers, this part of the decision is reassuring — the court recognised the technical distinction between training on data and copying data.

The Copyright 'Input' Question Isn't Settled

Before the court could rule on whether the training process itself infringed copyright in Getty's works (the so-called 'input' claim), Getty dropped that part of the case.

As such, we still don't have a UK precedent on whether using copyright-protected material to train models is lawful. Stability AI's defence was that the training data was not stored or downloaded in the UK, and Getty's withdrawal means we'll have to wait for another case (or new legislation) to get a clear answer.

Trademark Trouble: Logos in the Outputs

The court did find some limited trademark infringement. A few AI-generated images contained "Getty Images" watermarks, and that was enough for the judge to find that Getty's marks had been infringed.

This opens the door for brand owners to argue that their marks are being misused if AI systems generate content showing their logos or brand names, even accidentally. In short, copyright isn't the only legal risk for AI output — trademarks are now on the table, too.

What Happens Next

The ruling leaves AI developers and creatives in limbo on the biggest question: can you legally train AI models on copyrighted works without permission in the UK?

Right now, there's no clear "yes" or "no." The UK IPO has carried out a consultation seeking responses from relevant stakeholders, and Parliament might eventually step in. Until then, every company training or deploying generative AI models should assume there's still legal risk around the data used for training. US courts have taken a more flexible "fair use" approach in some cases, but it's unclear whether UK courts will follow that path if and when the next case comes before them.

Bottom Line for Tech Leaders

First, the good news: the court confirmed that an AI model isn't itself a 'copy' of the data it was trained on. The bad news is that we still don't know whether training on copyrighted material is lawful in the UK. And trademark owners now have a new angle to attack AI outputs that mimic their brands.

What might the next steps be? That's still unclear, but CIOs and AI developers should watch out for any appeals arising from the Getty case – and also for any clarifications or changes to UK copyright law.

Originally published by Tech Monitor

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

