At its outset, the case before the UK High Court in Getty Images v Stability AI looked set to provide a definitive outcome to the copyright debate dividing "two warring factions" (as the trial judge put it)—the creative industries and the AI industry. However, those hopes faded when Getty Images dropped its primary infringement claim at the end of the trial in June 2025, following insufficient evidence that Stability AI's generative AI model, Stable Diffusion, had been trained within the UK. The High Court's 4 November judgment is nevertheless significant because it clarifies how UK copyright law applies to AI models made available to the UK market but trained overseas. In this respect, it represents a win for the AI industry: the court rejected Getty Images' secondary copyright infringement claim and made only limited findings against Stability AI for trade mark infringement. With the copyright issues at the heart of the debate still unresolved, the UK government's next steps will be crucial in delivering legal certainty to both the AI and creative industries.
1 The background - withdrawal of the primary copyright infringement claim
In January 2023, Getty Images brought a claim against Stability AI, alleging that Stability AI had directly infringed its UK copyright by scraping and copying millions of images to train and develop the Stable Diffusion model. It also claimed that Stable Diffusion's output reproduced a substantial part of those images and bore watermarks that infringed Getty's trade marks.
However, during the trial, Getty Images dropped its primary copyright infringement claim, conceding there was insufficient evidence that Stable Diffusion had been trained or developed in the UK. Stability AI also blocked the prompts alleged to produce infringing outputs, meaning the relief Getty sought on the output claim was achieved in practice, so that claim too was abandoned. This left the secondary copyright infringement, trade mark and passing off claims for judgment.
2 The secondary copyright infringement claim
What was Getty's secondary copyright infringement claim?
The secondary infringement claim (under sections 22 and 23 of the Copyright, Designs and Patents Act 1988 (CDPA)) was that Stability AI imported into the UK, possessed, sold, hired or distributed an "article" which was, and which Stability knew (or had reason to believe) to be, an "infringing copy" of Getty's copyright works. The "article" was the model weights for Stable Diffusion (the learnable parameters controlling how the model processes inputs, which are adjusted during training).
Under section 27(3) of the CDPA, an article is also an "infringing copy" if it has been imported into the UK and its making in the UK would have constituted an infringement of copyright. Getty argued that Stable Diffusion, trained abroad but made available and distributed in the UK, fell within this provision: had the "making" (or optimisation) of the model weights taken place in the UK, it would have constituted an infringement.
An "article" can be intangible, but an "infringing copy" must itself be a copy
The court accepted Getty's argument that "articles" could, in the digital age, encompass intangible goods (such as software or electronic files downloaded from the cloud).
However, the court disagreed with Getty's interpretation of "infringing copy". It found that, for an article to be an infringing copy under s27(3) CDPA, the article itself needed to store or contain (at some point) a copy of the copyright works. Section 27 was not concerned with a process which, even if it involved acts of infringement along the way, ultimately produced an article that was not itself an infringing copy.
The court found that Stable Diffusion did not store or contain, and had never stored or contained, a copy of the images used for training. As such, the "article" being imported could not be an "infringing copy" for the purposes of sections 22 and 23 of the CDPA, and the claim for secondary infringement failed.
3 Trade mark infringement and passing off
Getty Images claimed that Stability AI's models infringed its UK trade marks by generating synthetic images bearing Getty or iStock watermarks. Getty Images succeeded in part in its trade mark claims, but the judge described her findings as "both historic and extremely limited in scope".
The court first addressed, as a "threshold issue", whether any evidence showed synthetic images with watermarks were, in fact, generated by UK users. It found sufficient real-world evidence for some older versions (v1.x, v2.x) but not for newer models (SD XL, v1.6), so claims relating to the latter failed.
For earlier versions of Stable Diffusion (v1.x and v2.x), a small number of outputs with iStock watermarks were shown to have been generated by users in the UK. Liability under section 10(1) of the Trade Marks Act 1994 (TMA) (identical marks on identical goods or services) was found in a few scenarios for the iStock watermark. Under section 10(2) TMA (similar marks giving rise to a likelihood of confusion), the court saw sufficient similarity and risk of confusion in a few specific instances (e.g. on a "Japanese Temple Garden" image) to constitute infringement. However, the judge stressed that the analysis was "highly fact sensitive" and that it was impossible to know how many watermarks, or on what scale, had been generated in real life that would fall within the s10(2) category.
Getty's claims under section 10(3) TMA (dilution, reputational harm and unfair advantage in respect of its trade marks) were dismissed because there was no persuasive real-world evidence of such damage or of a change in the economic behaviour of consumers.
The judge also declined to make any finding on passing off, stating that it would not add anything of substance to the trade mark findings and that, in any event, there was no need to address the point further in the absence of detailed arguments from the parties on post-sale confusion (over which there is still some legal uncertainty in the context of passing off).
4 The policy debate continues
This decision leaves the debate around the scraping of copyright works for AI training still rumbling.
The trial judge, Mrs Justice Joanna Smith DBE, was careful to define the court's limited remit from the outset.
It is not clear whether Getty will appeal. It issued a statement to say that it will be taking forward in its US case against Stability "findings of fact" from the UK ruling (e.g. in relation to its copyright works being used in training in the US). It also "urged governments, including the UK, to establish stronger transparency rules which are essential to prevent costly legal battles and to allow creators to protect their rights".
All eyes therefore now turn to the outcome of the UK government's public consultation on copyright and AI. The consultation proposed four broad policy options. The government originally presented, as its preferred option, an expansion of the text and data mining exception under the CDPA, so that copyright works could be used for AI training, subject to an opt-out allowing rightsholders to mark their works as "off-limits" for this purpose. The government's stance appears to have shifted since then, following the creative industries' strong resistance to the opt-out proposal, and it now seems more interested in encouraging licensing options coupled with transparency requirements.
AI/copyright commitments in the Data (Use and Access) Act 2025
The AI/copyright debate at one point threatened to derail the Data (Use and Access) Bill, as copyright provisions proposed in the House of Lords were repeatedly rejected by the House of Commons.
Ultimately, the Data (Use and Access) Act received Royal Assent in June 2025, without introducing substantive changes to UK copyright law, but requiring the government to publish by March 2026:
1. an economic assessment of all four copyright/AI policy options from the consultation.
2. a detailed report (or reports) on:
- technical standards for controlling the use of works in AI training
- the impact of copyright on AI developers' access to and use of data
- developer disclosures about the use of protected content and how they access copyright works (such as through web crawlers)
- licensing arrangements for acts restricted by copyright
- possible enforcement/compliance mechanisms, including the role for a potential regulator
The government must provide a progress report on the impact assessment and report(s) within the next few weeks (December 2025).
A "comprehensive" AI Bill, which Peter Kyle (then Secretary of State for Science, Innovation and Technology) suggested in June 2025 would address the copyright issue, is not expected until summer 2026 at the earliest. Following a subsequent cabinet reshuffle, we await further detail on any particular legislative solution. The Culture Secretary, Lisa Nandy, committed in September 2025 that the copyright issue would not be "kicked into the long grass".
She also acknowledged that the issue was "really difficult to solve" - a sentiment likely shared by many on both sides of the debate.