23 November 2023

AI Legal & Regulatory News Update—Week Of 11/19/23

Steptoe LLP

In more than 100 years of practice, Steptoe has earned an international reputation for vigorous representation of clients before governmental agencies, successful advocacy in litigation and arbitration, and creative and practical advice in structuring business transactions. Steptoe has more than 500 lawyers and professional staff across the US, Europe and Asia.
Below is this week's tracker of the latest legal and regulatory developments in the United States and in the EU.

AI Intellectual Property Update

  • The SAG-AFTRA strike has ended, with union members set to vote on a proposed contract. The tentative agreement's AI provisions state that if a producer plans to create a computer-generated character whose main facial features clearly resemble a real actor, and uses the actor's name and face to prompt the AI, the producer must first obtain the actor's permission. The agreement also requires that performers be compensated for the creation and use of any digital replicas of themselves.
  • Adobe is working on a new AI-powered audio tool designed to break apart different layers of sound within a single recording. Called "Project Sound Lift," the tool can automatically detect each sound and spit out separate files containing the background noise and the track users want to prioritize, such as someone's voice or the sound of an instrument.
  • YouTube plans to adopt new disclosure requirements and content labels for content created by generative AI. Starting next year, the video platform will "require creators to disclose when they've created altered or synthetic content that is realistic . . . For example, this could be an AI-generated video that realistically depicts an event that never happened, or content showing someone saying or doing something they didn't actually do." Penalties for not labeling AI-generated content could include takedowns and demonetization.
  • Some of Bing's search results now have AI-generated descriptions, according to a blog post from Microsoft. The company will use GPT-4 to garner "the most pertinent insights" from webpages and write summaries beneath Bing search results, and users can check which search result summaries are AI-generated.

AI Litigation Update

  • Music publishers that sued AI company Anthropic last month (M.D. Tenn. No. 3:23-cv-01092) have asked the court to issue a preliminary injunction that would prevent Anthropic from reproducing or distributing copyrighted song lyrics. The music publishers argue that Anthropic's use of their copyrighted works is not fair use and that publishers and their songwriters will suffer irreparable harm absent an injunction. The proposed injunction specifically requests that Anthropic (1) be ordered "to implement effective guardrails that prevent its current AI models from generating output that disseminates, in full or in part, the lyrics to compositions owned or controlled by Publishers" and (2) be prohibited from using unauthorized copies to train future models.

AI Policy Update—Federal

  • The FCC has approved an open-ended notice of inquiry asking how AI can be used to fight robocalls, as well as what risks the technology may pose. "Responsible and ethical implementation of AI technologies is crucial to strike a balance, ensuring that the benefits of AI are harnessed to protect consumers from harm rather than amplify the risk they face in an increasingly digital landscape," said Commissioner Anna Gomez.

AI Policy Update—European Union

  • The EU AI Act is in the last phase of the EU legislative process, the so-called trilogue negotiations, in which the European Parliament, the Council of the European Union and the European Commission negotiate the final text of the proposed AI Act:
    • Last week, France, Germany and Italy pushed back against any type of regulation for foundation models. This week, Reuters reports that the three countries have reached an agreement on how AI should be regulated: their governments support "mandatory self-regulation through codes of conduct" for foundation models but oppose "un-tested norms." This agreement could accelerate negotiations at the EU level and allow for a political agreement on the EU AI Act at the beginning of December.
    • Euractiv reports that the Members of the European Parliament involved in the EU AI Act will discuss the governance aspect of the legislation on November 21.
  • The European Data Protection Supervisor published a TechDispatch discussing the issue of "explainable" Artificial Intelligence, stating that "it is therefore unacceptable to have a 'black box' effect that hides the underlying logic of decisions made by AI."
  • The European Commission and the European High-Performance Computing Joint Undertaking have committed to open and widen access to the EU's supercomputing resources for European AI start-ups, SMEs and the broader Artificial Intelligence community as part of the EU AI start-up initiative.
  • The Italian Data Protection Authority issued Guidelines on the use of AI systems in the provision of national healthcare services. The Guidelines focus on ten personal data protection principles enshrined in the EU's General Data Protection Regulation and in EU and Member State case law. Although the Guidelines address Italian healthcare services, they can provide useful insights for the use of AI in healthcare more generally.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
