Below is this week's tracker of the latest legal and regulatory developments in the United States and in the EU. Sign up here to ensure you do not miss an update.

AI Intellectual Property Update:

  • Anthropic (the company behind the Claude AI model) has updated its terms of service to indemnify customers against copyright claims, including approved settlements: "we will defend our customers from any copyright infringement claim made against them for their authorized use of our services or their outputs, and we will pay for any approved settlements or judgments that result." The indemnification applies to Claude API customers and to those using Claude through Amazon Bedrock. Specifically, the revised terms of service state:
    • "Anthropic will defend Customer and its personnel, successors, and assigns from and against any Customer Claim (as defined below) and indemnify them for any judgment that a court of competent jurisdiction grants a third party on such Customer Claim or that an arbitrator awards a third party under any Anthropic-approved settlement of such Customer Claim. 'Customer Claim' means a third-party claim, suit, or proceeding alleging that Customer's paid use of the Services (which includes data Anthropic has used to train a model that is part of the Services) in accordance with these Terms or Outputs generated through such authorized use violates third-party patent, trade secret, trademark, or copyright rights."
  • Microsoft Copilot users can now create AI-generated custom songs through a new integration with the AI-music startup Suno. Users supply a simple prompt to generate a song, generally a minute or two in length, along with a transcript of the lyrics.
  • YouTube has reiterated that, in the coming months, it plans to require anyone uploading videos to disclose whether they used generative AI.

AI Litigation Update:

  • A group of 11 nonfiction authors has joined Sancton v. OpenAI (S.D.N.Y. 1:23-cv-08292-SHS), one of many cases in which copyright owners have sued AI developers for alleged misuse of their works in training generative AI tools.
    • As with the initial complaint, the amended complaint filed this week also names Microsoft, which has invested billions into OpenAI and integrated OpenAI's tools into its products, as a defendant.
    • The amended complaint does not alter the claims in the original complaint; it alleges both direct and contributory copyright infringement against OpenAI and Microsoft.

AI Policy Update—Federal:

  • The FTC released a staff report detailing "key takeaways" from an October 2023 roundtable that featured statements from artists and other creators on the effects of generative AI:
    • "Participants said that their work was being used to train and finetune generative AI models without their consent. Throughout the event, participants touched on different ways their work was being collected, either because it was publicly posted online by themselves or others, or because expansive interpretations of prior contractual agreements led others to make their art available to train AI. In addition, artists often produce work for hire and do not own the copyright on those creative works, further limiting their ability to control how their work is used. Participants said the nature of their work often leaves them without legal protection, and that the lack of transparency around data collection practices made it difficult for them to know when their works were being taken."
    • "Some AI developers have started offering people, including creative professionals, the choice to "opt-out" of their work being used to train future models, through methods such as direct opt-out forms, voluntarily complying with third-party lists, and public commitments to respect the Robots Exclusion Protocol. Participants raised multiple concerns about these kinds of opt-out frameworks, ranging from the practical, like not knowing whether their data was used and, thus, whether opt-out is even needed, to more fundamental issues with the approach, like shifting the burden from companies to creators. Participants also discussed the need for solutions that would not only limit the harm moving forward but also address the harm that has already occurred."
  • Responding to President Biden's recent Executive Order on AI, the National Institute of Standards and Technology has begun developing guidelines for evaluating AI, facilitating the development of standards, and providing testing environments for evaluating AI systems. As part of this effort, the agency has released a Request for Information seeking input on generative AI risk management and on reducing the risks of AI-generated misinformation. Responses are due by February 2, 2024.
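
For readers unfamiliar with the Robots Exclusion Protocol mentioned above: it is simply a robots.txt file published on a website, which compliant crawlers consult before fetching pages. The sketch below, using Python's standard urllib.robotparser module, is a minimal illustration only; "GPTBot" is OpenAI's published crawler token, while "SomeOtherBot" and the example URL are hypothetical.

    from urllib import robotparser

    # A minimal robots.txt a creator might publish to opt out of AI-training
    # crawls. "GPTBot" is OpenAI's published crawler token; "SomeOtherBot"
    # and the example URL below are hypothetical, for illustration only.
    ROBOTS_LINES = [
        "User-agent: GPTBot",   # OpenAI's training crawler...
        "Disallow: /",          # ...may not fetch anything on this site
        "",
        "User-agent: *",        # every other crawler...
        "Allow: /",             # ...remains free to index the site
    ]

    # urllib.robotparser implements the Robots Exclusion Protocol in the
    # Python standard library.
    parser = robotparser.RobotFileParser()
    parser.parse(ROBOTS_LINES)

    # A crawler that honors the protocol checks permission before fetching.
    print(parser.can_fetch("GPTBot", "https://example.com/artwork/1"))        # False
    print(parser.can_fetch("SomeOtherBot", "https://example.com/artwork/1"))  # True

As the example suggests, this mechanism only limits future crawls and depends entirely on crawlers voluntarily honoring the file, which is one reason roundtable participants viewed opt-out frameworks as shifting the burden from companies to creators.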

AI Policy Update—European Union:

  • The European Commission published a series of new calls for proposals for research projects related to AI and cybersecurity. Specifically, it earmarked a dedicated budget of €84 million to support security operations centers with novel applications of AI and other enabling technologies, to implement EU cybersecurity legislation, and to support the European transition to post-quantum cryptography.
  • The European Medicines Agency (EMA) released an AI workplan to guide the use of AI in medicines regulation. The workplan sets out a collaborative and coordinated strategy aimed at maximizing the benefits of AI for stakeholders while managing its risks.
  • In an end-of-year report, the Netherlands' data protection authority called for more oversight and a comprehensive plan to manage the risks of generative AI. It proposed rolling out a full AI plan by 2030 to introduce more human control and increase everyday awareness of how AI can affect people's lives.
  • The European Law Institute (ELI) published an Interim Report on EU Consumer Law and automated decision-making, setting out eight general principles that could guide the adaptation of existing EU consumer law to automated decision-making. The principles concern the attribution of a digital assistant's actions to the consumer, the application of consumer law to algorithmic contracts, pre-contractual information duties, non-discrimination, disclosure duties, the protection of digital assistants from manipulation, the determination and disclosure of the parameters used by digital assistants, and the conflicts of interest that could arise in this context.

International Update:

  • The Council of Europe is developing a Convention on AI and has released its second draft. The Convention aims to ensure that AI is developed ethically, with due regard for human rights, fundamental freedoms, democracy, and the rule of law, while supporting innovation.
  • The Council of Europe also published guidelines on the responsible implementation of AI systems in journalism. They aim to provide practical guidance to news media organizations, states, technology providers and digital platforms that disseminate news, detailing how AI systems should be used to support the production of journalism.
  • The International Organization for Standardization (ISO) published ISO/IEC 42001:2023, a standard which specifies the requirements for establishing, implementing, maintaining, and continually improving an AI management system within the context of an organization.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.