ARTICLE
13 February 2025

Generative AI And Deepfakes

Marks & Clerk

Marks & Clerk is one of the UK’s foremost firms of Patent and Trade Mark Attorneys. Our attorneys and solicitors are wired directly into the UK’s leading business and innovation economies. Alongside this we have offices in 9 international locations covering the EU, Canada and Asia, meaning we offer clients the best possible service locally, nationally and internationally.

In the fourth of a series of articles looking at how Generative AI is impacting trade marks and designs, we focus on deepfake technology - the behaviours surrounding the technology that are prohibited and the actions that can be taken to challenge it.

Deepfake technology is rapidly emerging as AI's latest 'Pandora's box'. No longer limited to producing meme-worthy parodies of celebrities and politicians, we are now seeing generative AI being actively weaponised, from misleading political deepfakes and clickbait celebrity advertisements, to school children deep-faking explicit pictures of classmates.

As the capabilities of AI technology race ahead of regulation, public concern is growing over the threat posed by deepfakes. So, what behaviours around deepfake technologies are currently prohibited, and what action can be taken to challenge deepfakes?

It is of course recognised that similarities exist in nature – there's the old adage that seven people in the world look like you – but at what point does the law step in to control the use of such similarities, and where can businesses encounter difficulties as a result?

A useful example is the 2019 ruling against vape company Diamond Mist, which published an advertisement featuring imagery of a male model bearing a close resemblance to the athlete Mo Farah, alongside the strapline "Mo's mad for menthol".

Mo Farah took to Twitter to complain about the potential confusion, concerned that people would think that he had endorsed the product. The Advertising Standards Authority (ASA) subsequently ruled that the advertisement did indeed give a misleading impression: while 'Mo' is a common moniker, the model's head and eyebrows were sufficiently reminiscent of the athlete that viewers would associate the advertisement with Mo Farah, thereby implying that he had endorsed the product.

While the image in this example had not been generated by AI, it does provide a useful illustration of the dangers of using AI generated deepfake images – if the image generated is sufficiently misleading to confuse the public, then problems may arise.

Consequently, businesses should carefully consider the adoption of any AI generated imagery and audio-visual content, to ensure that they are not exposing themselves to liability. Just because a generative AI photograph or video chosen for a marketing campaign appears to be generic does not necessarily mean that it is available for use.

Current law around deepfakes

To date, there is no single piece of legislation within the UK that provides blanket protection against deepfakes. Instead, individuals are protected under an assortment of laws and regulations depending on the nature of the deepfake. Some of the most common are set out below:

Online Safety Act

The Online Safety Act contains one main provision against deepfakes. While it has been illegal to share intimate or explicit images of someone without their consent since 2015, the Online Safety Act extends this offence to make it illegal also to share intimate AI-generated images of someone without their consent. Crucially, unlike the offence concerning genuine intimate content, it is not necessary to prove that the creator intended to cause distress in the case of deepfake imagery, although it is considered a further, more serious offence if a sexual intention can be demonstrated.

It is important to note that this provision does not criminalise the creation of an explicit deepfake, only its sharing. The Online Safety Act is also primarily focused on removing offensive content; many are concerned that these provisions will prove ineffective, with the creation of intimate deepfakes continuing to be unregulated and perpetrators escaping punishment.

Advertising Standards Authority

The ASA has the power to challenge any advertisements that contain misleading content. In terms of deepfakes, this mostly arises in the case of misleading advertisements containing deepfake celebrity endorsements.

Most businesses know not to use celebrity endorsements without first securing the celebrity's consent, as celebrities commonly own trade mark and other forms of IP protection for their name, likeness, voice and gestures. In turn, the use of an AI generated deepfake advertisement claiming to be an endorsement from a celebrity, without first obtaining that celebrity's consent, may well mislead the public and attract the attention of the ASA.

Furthermore, as highlighted by the Mo Farah case above, a likeness does not need to be identical to be objectionable; it just needs to confuse the viewer.

There is therefore a danger that businesses could fall foul of the ASA by using generative-AI output that is similar enough to real-life celebrities to cause confusion.

Civil Law

Civil law provides a number of ways in which individuals may challenge the use of deepfakes. While there is no specific legislation addressing deepfakes, individuals may be able to rely on the following:

  • Trade mark infringement: if an individual (usually a celebrity) has acquired registered trade mark protection for their name, likeness, voice or gestures, then unauthorised use of those features could amount to trade mark infringement. This cause of action does require the infringing use to be in the course of trade – as such, any advertisements using a deepfake likeness of the trade mark owner may be actionable, but any non-commercial use of the deepfake likeness would escape liability.
  • Passing off: again, if an individual has acquired a reputation for endorsing the products or services of other companies (for instance, George Clooney's endorsement of Nespresso), then the use of deepfakes to suggest endorsement by that individual could give rise to a legal claim for passing off (similar to unregistered trade mark infringement). Again, such unauthorised use of the deepfake likeness would need to be in the course of trade for liability to arise.
  • Copyright infringement: the use of photographs, voice recordings or video recordings in generating the deepfake likeness might amount to infringement of the copyright subsisting in those original materials. Hence, use of a deepfake may be actionable by the copyright owner of those original materials, even if such infringing use is not in the course of trade.
  • Privacy: a deepfake could be considered a violation of one's right to privacy, especially if it is possible to show that the creator used personal data to create the deepfake, which is protected under UK GDPR and the Data Protection Act 2018.
  • Harassment: the use of multiple deepfakes with the intent to cause alarm or distress could form the basis of a harassment suit.
  • Defamation: if use of a deepfake has an adverse effect on one's reputation by portraying an individual in a false or damaging way, there is the potential for a defamation case.

Whilst the lack of a single piece of legislation governing the creation and use of deepfakes does mean that taking action against deepfakes can be complicated, the above comments show that there are numerous routes by which deepfakes can be challenged. As such, businesses considering the use of AI generated imagery and audio-visual content should take legal advice before commencing the use of such materials.

Equally, any individuals who have been the subject of deepfakes should also take legal advice as to whether any legal action may be appropriate to challenge the unauthorised use of their likeness or other attributes.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
