The Netflix hit anthology series "Black Mirror" raises warnings about how artificial intelligence might be used and highlights a number of interesting legal issues. Should we heed those warnings?

A recent "Black Mirror" episode "Joan Is Awful" depicts a woman who discovers that a global streaming platform called Streamberry has launched a TV drama that's an adaptation of her life.

Streamberry uses real-time data collected from Joan's phone, including texts, phone calls, and even conversations within "earshot," to create the show virtually in real time on a quantum computer using generative artificial intelligence and computer-generated imagery.

Joan is portrayed by a likeness of Salma Hayek created with generative AI and computer-generated imagery. In other words, Streamberry produces the show entirely with AI-generated imagery, so viewers can follow Joan's life in real time without actual actors or sets.

As the title "Joan Is Awful"suggests, the portrayal of Joan's "real life" is unflattering, causing her to lose her job, her fiancée, and her friends. When she consults a lawyer to determine how to prohibit Streamberry's continued use of her personal information, she's advised the actions are legal due to Streamberry's terms and conditions, which allow for collection and exploitation of her customer information and likeness.

In a turn of events we will not spoil, the real-life Joan commits unseemly acts in an attempt to get the show canceled. She ultimately teams up with Hayek, whose "likeness" performs the unseemly acts Joan commits on the show, to take down the quantum computer that generates it.

The episode gives the impression nothing can be done to stop AI, which is untrue. But the show raises several valid legal questions about AI, for which there are not yet clear answers.

A litany of lawsuits seeking clarity on some of these issues has recently been filed, and President Joe Biden highlighted the urgent need for AI regulation with his recent executive order. Still, this area of law is in its infancy.

Existing laws concerning intellectual property, such as copyright, as well as name and likeness, privacy rights, and consumer protection, could apply to the events depicted in "Joan Is Awful."

Name and Likeness

"Joan Is Awful" highlights the dangers of using someone's name and likeness in both private citizen and public figure contexts. Joan's personal life is exposed and exploited, wreaking havoc on her real life and revealing secrets and private information along the way.

At the same time, the episode explores the limits of using Salma Hayek's likeness. While right of publicity and other name and likeness concerns are generally (though not always) limited to the exploitation of public figures, "Joan Is Awful" raises new concerns for private citizens whose name and likeness are exploited for streaming shows and other commercial uses.

AI Lawsuits

Copyright infringement suits have been the most common type of AI-related lawsuits to date. These have focused mainly on the use of copyrighted works to "train" machine learning engines, which, in turn, use that training to create outputs based on those works without permission and without crediting the works or their authors.

Generative AI replicates images, text, and more—including copyrighted content—from the data used to train it.

Examples of copyright lawsuits and related disputes to date include:

  • Several class actions were filed on behalf of book authors, led by Mona Awad, Paul Tremblay, and Sarah Silverman, against two generative AI creators, OpenAI and Meta.
  • Microsoft, GitHub, and OpenAI are being sued in a class action brought by coders that accuses them of violating copyright law by allowing Copilot, a code-generating AI system trained on billions of lines of public code, to regurgitate licensed code snippets without providing credit.
  • Two companies behind popular AI art tools, Midjourney and Stability AI, are in the crosshairs of a legal case alleging they infringed millions of artists' rights by training their tools on web-scraped images.
  • Stock image supplier Getty Images sued Stability AI for reportedly using millions of images from its site without permission to train Stable Diffusion, an art-generating AI.
  • An AI tool used by CNET to write explanatory articles was found to have plagiarized articles written by humans—articles presumably swept up in its training dataset.
  • Several copyright applications were rejected by the US Copyright Office for lacking human authorship, raising the question of what level of human authorship is required.
  • The New York Times sued Microsoft and OpenAI for copyright infringement, seeking billions of dollars in damages.

Courts will soon have to weigh in on the use of computers, quantum or otherwise, to replicate images and text from copyrighted material.

Privacy Concerns

The social commentary of "Joan Is Awful" focuses on AI-generated content built from data provided by parties who inadvertently consented (like Joan), and on machine learning that runs afoul of privacy protections such as HIPAA (medical data from Joan's therapist) and attorney-client privilege (conversations with Joan's lawyer).

One could argue this is precisely why Italy temporarily banned ChatGPT, citing a lack of compliance with the EU's landmark GDPR privacy legislation.

AI in the US is largely unregulated; it frequently raises privacy concerns and relies on machine learning over data sets used without consent from their creators. This is likely to change, but it's unclear when and what rubric will be implemented.

As we move forward in this still-largely uncharted territory, will the US and other countries heed Netflix's AI warnings? Stay tuned.

Originally Published by Bloomberg Law

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.