ARTICLE
13 November 2025

AI Models Trained Abroad Do Not Infringe When Deployed In The UK, Unless They Store Or Reproduce Copyright Works

A&O Shearman


Getty's remaining UK copyright claim against Stability AI has been dismissed by the High Court. The AI model and its weights were held not to be "infringing copies" imported into the UK on the basis that they never copied or stored any copyright work.

Significantly for application developers in the UK, this means that AI models trained outside the UK are not exposed to UK copyright law when deployed in the UK, provided they do not store or reproduce copyright works. This finding sits uneasily alongside the analogous question being considered by data protection regulators in the privacy context: whether model weights constitute personal data. There is no broad consensus on what model weights actually represent.

Getty's other copyright claims were dropped during trial, so the court didn't need to rule on the legality of training AI systems in the UK and possible copyright infringement by AI-generated output.

There was some infringement of Getty's trade marks, but this is only of historical relevance because it was limited to earlier versions of Stable Diffusion.

Background

Getty filed an action in the UK (and an equivalent in the U.S.) alleging copyright, trade mark, and database rights infringement. Getty originally asserted that Stability AI had unlawfully copied, in the UK, millions of images owned or represented by Getty Images to train its Stable Diffusion image generator. It also claimed that there was further unauthorized copying or communication to the public in the UK of a substantial part of its images at the point of use, i.e., in the images output from Stable Diffusion. Both of these claims were, however, withdrawn by Getty on the first day of the trial's closing submissions, as explained further in our previous blogpost.

Secondary copyright infringement

The remaining copyright claim concerned Stability AI's deployment in the UK of Stable Diffusion, even though the model had been trained elsewhere. Under s22 of the Copyright, Designs and Patents Act 1988 (CDPA), copyright is infringed by a person who, without consent, imports into the UK (other than for private and domestic use) an article which is, and which they know or have reason to believe is, an infringing copy of the work. This allegation raised two legal issues:

1. Is Stable Diffusion an "article"?

The court confirmed that an "article" can include an intangible article such as an AI model. Looking at the relevant context, the article needs to be an infringing copy and, under s17(2) CDPA, copying means copying "in any material form", which includes "storing the work in any medium by electronic means", e.g., an electronic copy stored in an intangible medium such as the AWS Cloud. Intangible means of storage had not been invented when the CDPA was enacted, so they were not expressly provided for in s22. Nevertheless, secondary copyright protection should cover circumstances where the means of storage have changed because of advances in technology.

2. Is Stable Diffusion (or its model weights) an infringing copy?

The claim failed because the various versions of Stable Diffusion had never included or comprised a reproduction of any copyright work. An article can't be an infringing copy if it has never consisted of, stored or contained a copy—there needs to be a copy of the work, which can be stored in any medium by electronic means.

It was noted that the court in Sony v. Ball had held that a RAM chip becomes an infringing copy when the act of reproduction occurs, but ceases to be an infringing copy when it no longer contains the copy. By contrast, the court found that an AI model which derives or results from a training process involving the exposure of model weights to infringing copies is not itself an infringing copy. It is not enough that the copies of the copyright works were made at the same time as the model.

The court also held that AI model weights are not infringing copies because they do not store an infringing copy of any copyright work. The model weights are altered during training by exposure to copyright works but, by the end of that process, the model itself does not store any of those works. The model weights are purely the product of the patterns and features which they have learnt over time during the training process.

Trade mark infringement

As a threshold issue, the judge looked at whether any UK user of any version of Stable Diffusion had ever seen a watermark incorporating any sign similar to Getty's marks. She found that watermarks were "non-trivial" in earlier (version 1) models, which were trained on images with watermarks, and generated them with high frequency because of memorisation and overfitting. There was real world evidence of UK users seeing watermarks for earlier versions but not for later models.

Stability argued that responsibility for the watermarked outputs lay with the user. The court disagreed: Stability had used the signs in its own commercial communication via the output image bearing a watermark generated by its model. This was more than merely storing the output images; it was offering the service of generating the images and putting them on the market. This involved active behaviour and control on the part of Stability AI: it trained the model, it made the model available to its consumers, and it could have filtered out watermarked images. The user was not responsible for conditioning the circumstances in which the outputs were generated. For later versions, Stability AI made deliberate choices on training datasets and filters and was responsible for the model weights. This went beyond merely creating the technical conditions necessary for use of the sign. The use of the sign created the impression that there was a material link in the course of trade between Getty, the trade mark owner, and the goods concerned.

1. s10(1) Trade Marks Act (use of identical sign for identical goods)

There was s10(1) infringement of the iSTOCK mark by the earlier version 1 models accessed via the API and DreamStudio, as evidenced by the SpaceShip image generated by model version 1.2.

There was no evidence of a real-world user generating an image bearing a watermark identical to the GETTY IMAGES marks or any evidence of later models generating such signs.

2. s10(2) Trade Marks Act (use of similar sign for similar goods with a likelihood of confusion)

There was s10(2) infringement of the GETTY IMAGES mark, as evidenced by the generation by version 2.1 of a blurred/distorted watermark in the First Japanese Garden image.

3. s10(3) Trade Marks Act (use of a sign which is detrimental to a mark with a reputation)

This claim failed. There was neither a plea nor any real evidence of a change in economic behaviour resulting from tarnishment of the reputation of the Getty marks. It was also impossible to say how many images incorporated watermarks that might establish a link with Getty or tarnish the reputation of the Getty marks. This is a particular difficulty in cases like this, where the signs, and the images on which they appear, are always different.

Implications and key takeaways

  • The main takeaway of this judgment is that AI models trained outside the UK that don't store or reproduce copyright works are not exposed to UK copyright when deployed in the UK. This has implications for accountability along the AI value chain and will be of some comfort to application developers operating in the UK, who often build on top of AI foundation models trained outside the UK.
  • Key to this decision is the finding that AI model weights do not reproduce or store any copyright work but, rather, are a network of statistical weights and parameters that describe relationships and patterns within datasets rather than individual works. Some argue that this is based on a misunderstanding of model weights—if the model is capable of regurgitating training data, it follows that, at least functionally, this could constitute "storing" of a copyright work.
  • An analogous debate is being played out in a privacy context, with seemingly contradictory conclusions. For instance, the European Data Protection Board has stated that model weights are capable of being personal data because training data can be "absorbed" into them, as mathematical representations, such that the personal data is capable of being reproduced.
  • Nevertheless, the decision isn't a surprise. It was always unlikely that the judge would adopt an extreme interpretation for AI technology, particularly when there are wider copyright and AI policy debates currently ongoing in the UK. The court did acknowledge that it may be possible to escape UK copyright law by carrying out infringing acts abroad and importing the fruits of those acts into the UK. However, the scheme of UK copyright law is to provide protection to copyright owners against copying. Parliament did not intend for an AI model which is not an infringing copy to infringe—that falls outside the policy and object of the current legislation.
  • It is unfortunate that the judgment doesn't tackle the big questions on AI and copyright. There are still unanswered questions regarding the legality of training AI systems in the UK and possible copyright infringement by AI-generated output. We do know, however, that a successful training claim needs evidence of training taking place in the UK. Guardrails incorporated into a model to block user prompts that would generate infringing outputs also make liability, and scrutiny, less likely. However, future claims may fare better where the relevant copyright works are particularly original and there is a clear link between the work and an output.
  • At least 60 other AI-related copyright infringement lawsuits remain ongoing globally, including the parallel Getty v. Stability AI proceedings in the U.S., which will consider the more fundamental questions of primary copyright infringement. We have also yet to see any copyright lawsuits directed at outputs (i.e., allegations that deployers of AI systems are infringing copyright by virtue of an AI system generating an infringing copy of a work within the training data), which would place greater scrutiny on the question of whether model weights do indeed reproduce training data.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
