Below is this week's tracker of the latest legal and regulatory developments in the United States and in the EU. Sign up here to ensure you do not miss an update.

AI Intellectual Property Update:

  • Apple has reportedly opened negotiations with major news and publishing organizations (including Condé Nast, NBC News and IAC), seeking permission to use their material in the company's development of generative AI. Apple has allegedly offered multiyear deals worth at least $50 million to license archived content. "Several publishing executives were concerned that Apple's terms were too expansive . . . The initial pitch covered broad licensing of publishers' archives of published content, with publishers potentially on the hook for any legal liabilities that could stem from Apple's use of their content."
  • Microsoft's Copilot (formerly Bing Chat) is now available as an iOS app. The app allows users to ask questions, draft emails, and summarize text, as well as create images through an integration with the text-to-image generator DALL-E 3.

AI Litigation Update:

  • The New York Times has filed a complaint against OpenAI and Microsoft (S.D.N.Y. 1:23-cv-11195):
    • The complaint alleges that OpenAI and Microsoft's AI tools "can generate output that recites Times content verbatim, closely summarizes it, and mimics its expressive style," which "undermine[s] and damage[s]" the Times' relationship with readers, while also depriving it of "subscription, licensing, advertising, and affiliate revenue."
    • The complaint also argues that AI models "threaten high-quality journalism" by hurting the ability of news outlets to protect and monetize content: "Defendants seek to free-ride on The Times's massive investment in its journalism by using it to build substitutive products without permission or payment."
    • In addition to "billions of dollars in statutory and actual damages," the complaint requests that the court prevent OpenAI and Microsoft from training their AI models using New York Times content, as well as require removal of the Times' work from the companies' datasets.
    • The complaint states that the Times had attempted to reach a "negotiated agreement" with OpenAI "to ensure it received fair value for the use of its content, facilitate the continuation of a healthy news ecosystem, and help develop GenAI technology in a responsible way that benefits society and supports a well-informed public" before bringing suit.
  • An amicus brief filed with the Supreme Court in Relentless, Inc. v. Department of Commerce (22-1219) argues that the preservation of Chevron deference is vital for the future of regulating AI:
    • The brief, submitted by the Emory Law and Artificial Intelligence Society, asserts that the need for Chevron is "especially clear" in the context of AI regulation: A "predictable regulatory framework" is critical "for keeping innovators in the field and ensuring their innovations positively serve humanity."
    • To both protect the public and foster new technology, companies "must be able to trust agency regulations," said the brief. That trust "would be undermined with the elimination of Chevron deference," because agency deference under Chevron "allows for consistency among courts throughout the country and lays the foundation for the stability and clarity necessary to regulate emerging technologies."

AI Policy Update—Federal:

  • In the Supreme Court's 2023 Year-End Report on the Federal Judiciary, Chief Justice Roberts urged "caution and humility" as AI transforms the legal field.
    • The report predicts "that judicial work—particularly at the trial level—will be significantly affected by AI." While "legal research may soon be unimaginable without" AI, the Chief Justice notes that the technology "risks invading privacy interests and dehumanizing the law."
    • Courts must "consider [AI's] proper uses in litigation" while understanding that "machines cannot fully replace key actors in court." For instance, Chief Justice Roberts acknowledges that the use of AI in the legal system made headlines this year "for a shortcoming known as 'hallucination,' which caused the lawyers using the application to submit briefs with citations to non-existent cases. (Always a bad idea.)"

AI Policy Update—European Union:

  • European Commissioner for Competition Margrethe Vestager defended the proposed EU AI Act, saying that it will give legal certainty to companies that build foundation models. Vestager's defense came after French President Emmanuel Macron argued that the EU AI Act risks leaving European tech companies lagging behind those based in the US and China.
  • Euractiv reports that French senators criticized the French government's stance in the EU AI Act negotiations, particularly with regard to copyright protection.
  • The European Parliamentary Research Service published a briefing on generative AI and watermarking. The briefing discusses the need for transparency around generative AI and the role of AI watermarking techniques.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.