If you can't picture Nicki Minaj, Stormzy and Greta Thunberg as bickering neighbours, just tune into ITV's new show 'Deep Fake Neighbour Wars' and see for yourself!

Except that - as the name of the show suggests - you won't see the real Nicki, Stormzy and Greta. Rather, impersonators will be lending their voices to deepfakes of these (and other) public figures and celebrities.

When ITV announced the show, my first question was: won't lawsuits quickly follow? But the answer, as ever with deepfakes, is not so clear-cut.

What are deepfakes?

Deepfakes are photos, audio recordings or videos that have been digitally altered or synthetically generated using machine learning and artificial intelligence, either to make one person appear as someone else (here, to make an impersonator look like the real Nicki Minaj) or to make someone (e.g. a politician or celebrity) appear to say or do things that never actually happened.

In the past few years, not only has the quality of deepfakes improved dramatically, but the cost of creating them has also fallen, democratising the medium. This has led to legitimate concerns about the malign uses of deepfakes (e.g. revenge porn and the spread of disinformation).

What is the position in the UK?

This depends on the type of deepfake at play.

The UK's Online Safety Bill, which is currently making its way through Parliament after numerous amendments, envisages introducing criminal offences related to the sharing of non-consensual pornographic deepfakes.

However, when it comes to other forms of deepfakes, the legal landscape is not as clear. Let's assume Nicki Minaj does not in fact want to be associated with the TV show 'Deep Fake Neighbour Wars': what options would she have at her disposal?

Passing off? 

Given that she is a celebrity and therefore has a demonstrable brand ('goodwill'), Nicki might try to argue that ITV is leading its audience to believe that she is affiliated with or endorses the show. Likewise, if her deepfake is shown promoting a particular brand of drink (for example), she might seek to argue the same. However, given the disclaimers provided by ITV (and the name of the show itself), such an action is unlikely to succeed.

Copyright infringement?

In order for an AI to create a deepfake, it needs to be trained on data relating to the person it is trying to synthetically generate. In the case of Nicki, that will include music, music videos, interviews, photos from her social media platforms, press photos, etc. 

Nicki is unlikely to personally own the copyright in all of her photos, music and music videos. More likely, the copyright owners will be record labels, photographers and the like. Unless they were on board with bringing a claim against ITV (which would require proving that the AI was indeed trained on their copyrighted data), Nicki herself is unlikely to have a valid cause of action. Not only that, but ITV could in any event argue that its use of copyrighted material constitutes 'fair dealing' as it falls within the exception for caricature, parody and pastiche.

In addition, ITV might argue that its deepfake of Nicki (and indeed the entire show) is protected as a creative work in its own right, in which ITV (or perhaps the relevant production company) owns the copyright and therefore has rights of its own.

Indeed, it's possible that if someone else edited segments of Nicki's deepfake on the show, without providing any context/disclaimer, and used those for a completely separate advertising campaign, Nicki Minaj and ITV might both seek redress from the rogue third party (for passing off and copyright infringement respectively)!

Defamation?

If Nicki can prove that the deepfake has caused her serious reputational harm (because, for example, on the show her deepfake violently fights Greta Thunberg's deepfake), she might be able to bring an action for defamation against ITV. 

However, given the comedic and light-hearted nature of the show (and the 'real' Nicki's past conduct, as widely reported by the press), the prospect of establishing serious harm seems remote.

Data protection?

Finally, Nicki might attempt to use data protection laws and laws protecting privacy (enshrined in the Data Protection Act 2018 and the UK GDPR), depending on the nature of what her deepfake on the show might say or do. For example, if the AI trained to create Nicki's deepfake had accessed her private messages, which in turn led ITV to script a line that revealed confidential or private information about her, Nicki might be able to bring a claim for misuse of private information. If, however, the AI was only trained on data that was publicly available (including on Nicki's public social media accounts), then ITV could easily argue that she couldn't have a reasonable expectation of privacy in that type of content. 

Have any other approaches been taken elsewhere in the world?

United States

Like the UK, the US has a patchwork of general legislation that may tackle deepfakes, but certain states have also passed deepfake-specific laws. Virginia and New York, much as the UK's Online Safety Bill proposes to do, have banned the sharing of non-consensual pornographic deepfakes. Texas and California have banned the creation of deepfakes intended to influence an election (in California's case, the ban applies to deepfakes of politicians within 60 days of an election).

China

As reported by the New York Times, in January 2023 China adopted new rules requiring providers of deepfake services to give people the option of 'refuting rumours', to mark altered materials with watermarks or other identifying features, and to obtain the subject's consent.

What's next?

Aside from monitoring Nicki Minaj and Greta Thunberg's reactions to ITV's show, our eyes are fixed on the claim filed by Getty Images against Stability AI (the company behind 'Stable Diffusion', a generative AI art tool) here in the UK for alleged copyright infringement. Getty Images argues that Stability AI unlawfully 'scraped' Getty's copyrighted images to train Stable Diffusion. Stability AI's likely position is that any such processing constitutes fair dealing, but if the case is tried in England it will shape the legal landscape in the UK and perhaps lead to more deepfake-specific laws in the near future.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.