7 December 2023

AI Legal & Regulatory News Update—Week Of 11/26/23

Steptoe LLP


In more than 100 years of practice, Steptoe has earned an international reputation for vigorous representation of clients before governmental agencies, successful advocacy in litigation and arbitration, and creative and practical advice in structuring business transactions. Steptoe has more than 500 lawyers and professional staff across the US, Europe and Asia.
Below is this week's tracker of the latest legal and regulatory developments in the United States and in the EU. Sign up here to ensure you do not miss an update.

AI Intellectual Property Update:

Politico reports that Microsoft has told the UK Intellectual Property Office that "British law doesn't require it to obtain a license from copyright holders to train AI models on their creations."

  • "Other AI firms such as Meta and Stability AI have also raised concerns over a licensing regime and called for the law to be amended — but did not go as far as Microsoft in stating that U.K. law already permits such activity. Unlike the U.S., U.K. copyright law does not include a broad 'fair use' carve-out."
  • Microsoft previously wrote in comments to the UK House of Lords: "Large scale AI models require access to data at scale in order to function correctly. Limiting the ability to use publicly available and otherwise legally accessed data for training AI models will lead to poorly performing AI, which is potentially unsafe, unethical and biased. It is therefore important that existing exceptions in copyright law clearly permit the training of AI systems, and intellectual property laws do not develop to prevent text and data mining. Performing text and data mining is not a copyright infringement and performing text and data mining on publicly available and legally accessed works should not require a licence. If licensing was required to train AI models on legally accessed data, this would be prohibitive and could shut down development of large scale AI models in the UK."

AI Litigation Update:

  • A federal judge dismissed all claims in the Sarah Silverman copyright lawsuit against Meta (No. 23-cv-03417-VC, N.D. Cal.). Note that Meta did not move to dismiss the claim alleging that unauthorized copying of the plaintiffs' books for purposes of training the LLaMA model constitutes copyright infringement.
    • The court called the plaintiffs' argument that the LLM is an infringing derivative work because the model cannot function without the information extracted from the plaintiffs' books "nonsensical" because "there is no way to understand the LLaMA models themselves as a recasting or adaptation of any of the plaintiffs' books."
    • The court also rejected the theory that every output of the model is an infringing derivative work because "the complaint offers no allegation of the contents of any output, let alone of one that could be understood as recasting, transforming, or adapting the plaintiffs' books. Without any plausible allegation of an infringing output, there can be no vicarious infringement . . . To the extent that they are not contending LLaMa spits out actual copies of their protected works, they would need to prove that the outputs (or portions of the outputs) are similar enough to the plaintiffs' books to be infringing derivative works. And because the plaintiffs would ultimately need to prove this, they must adequately allege it at the pleading stage."
  • A new lawsuit by author Julian Sancton was filed against OpenAI, this time also naming Microsoft as a defendant.

AI Update—Federal

  • Showing its heightened interest in this area, the FTC voted to approve a resolution allowing the agency to more easily obtain information in investigations relating to AI. Specifically, the FTC's staff is authorized to issue civil investigative demands (CIDs) in non-public investigations involving products or services that use AI.
  • Senator Schumer hosted the seventh closed-door "AI Insight Forum," this time focusing on AI transparency and intellectual property.
    • Schumer spoke with reporters after the forum and outlined the following areas of consensus:
      • Copyright: the need to create and enforce protections for creators, including AI transparency.
      • Intellectual property: there is a role for the federal government to play in protecting companies' IP.
      • Transparency: expand on President Biden's executive order to build more robust disclosure requirements for AI systems.
      • Explainability: understand why AI systems produce the answers they do, especially in high-impact areas like finance, healthcare, and criminal justice.

AI Policy Update—European Union:

  • The EU AI Act is continuing through the so-called trilogue negotiations:
    • Euractiv reports that members of the European Parliament circulated a working paper detailing their proposed approach to a series of binding obligations for providers of foundation models that pose a systemic risk. These obligations involve internal evaluation and testing, including red-team assessment, cybersecurity measures, technical documentation and energy-efficiency standards.
    • Tech industry associations oppose the EU AI Act's proposed regulation of foundation models. The associations state that any threshold used to establish whether an AI model is "highly capable" or "high impact" should be based on a "thorough assessment" and on "consultation with AI industry, academic and civil society experts." They are also concerned about proposals to introduce additional requirements for the use of copyrighted data to train AI systems.
    • Tech industry associations also warn the EU against over-regulating foundation models in the EU AI Act and against the risk of misalignment with existing sectoral legislation.
  • The European Commission introduced the AI Pact, encouraging companies to voluntarily commit to implementing measures outlined in the EU AI Act before the legal deadline. The AI Pact aims to create a community of key EU and non-EU industry players that exchange best practices, with the aim of increasing awareness of the future EU AI Act's principles. Interested organizations can express their interest in participating, with the formal launch expected after the EU AI Act's adoption.
  • The Italian Data Protection Authority has launched a fact-finding investigation to scrutinize the data collection methods used in AI training, with a specific focus on whether websites adopt security measures to prevent the scraping of personal data for the purpose of AI training. The investigation will concern all data controllers that operate in Italy and make their users' personal data available online. An accompanying public consultation will last 60 days.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
