Below is this week's tracker of the latest legal and regulatory developments in the United States and in the EU.

AI Litigation Update:

  • OpenAI asked a federal judge to dismiss parts of the New York Times' copyright lawsuit against it, arguing that the newspaper "hacked" its chatbot ChatGPT and other AI systems to generate misleading evidence for the case. OpenAI said in a federal court filing that the Times caused the technology to reproduce its material through "deceptive prompts that blatantly violate OpenAI's terms of use."
  • Media outlets The Intercept Media Inc., Raw Story Media Inc., and Alternet Media Inc. filed a pair of lawsuits against OpenAI in Manhattan federal court on Wednesday, alleging the firm removed author information from articles in data sets used to train its generative AI.

AI Intellectual Property Update:

  • Researchers at Stanford University's Human-Centered AI published a paper aimed at creating a more precise understanding of the risks and benefits of open-source AI. The researchers found that the main benefits are distributing decision-making power, reducing market concentration, increasing innovation, accelerating science, and enabling transparency.

AI Policy Update—U.S.:

  • The first section of the U.S. Copyright Office's highly anticipated artificial intelligence report is expected later this spring, and will offer the agency's most comprehensive policy view yet on deepfakes, according to a letter sent Friday to members of Congress. The first section will be followed by a chapter on the copyrightability of works incorporating AI-generated material. Later parts will analyze the legality of training AI models on copyrighted works.
  • Sen. Mike Rounds (R-SD), one of four lawmakers on the Senate's bipartisan AI working group, said he and his colleagues plan to issue a report on possible AI legislation by the "end of March." Sen. Rounds said that the report will contain "some guidelines and some ideas" for the bevy of Senate committees expected to produce AI bills relevant to their policy sectors.
  • The Connecticut Legislature is gaining prominence as a national leader as states rush to regulate artificial intelligence. Lawmakers introduced a sweeping AI bill earlier this month that would regulate deepfakes and impose requirements on AI developers (such as conducting impact assessments), among other things.
  • California lawmakers are grappling with a flood of ambitious bills this year that aim to safeguard against misuses of artificial intelligence as they balance such concerns with input from the tech industry, worker advocates, and others. At least 30 AI-related measures have been introduced in California this year. Major companies, from Microsoft to Meta, are lobbying over such measures.

AI Policy Update—European Union:

  • Microsoft struck a deal with French AI start-up Mistral as it seeks to broaden its involvement in the fast-growing industry beyond OpenAI. The partnership will include a research and development collaboration to build applications for governments across Europe and "use these AI models to address public sector-specific needs."
  • The European Commission plans to look into Microsoft's new partnership with French artificial intelligence company Mistral. The U.S. technology giant revealed that Mistral would get access to Microsoft's supercomputer to train and run its AI models, which would be made available to customers of Microsoft's cloud platform Azure.
  • The European Union Agency for Cybersecurity (ENISA) published a report titled Cyber Insurance - Models and methods and the use of AI. The report aims to introduce cyber risk and cyber insurance, provide an overview of existing research and modelling approaches, and identify gaps for upcoming research projects.
  • Meta will activate an EU-specific Elections Operations Center in preparation for the June 2024 European Parliament elections. In this context, Meta will bring together experts from across the company to focus on combating misinformation, tackling influence operations, and countering risks related to the abuse of generative AI technologies.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.