AI Licensing Update:

  • The issue of AI-generated music is complicating negotiations between TikTok and Universal Music Group (UMG). In an open letter, UMG wrote: "TikTok is allowing the platform to be flooded with AI-generated recordings—as well as developing tools to enable, promote and encourage AI music creation on the platform itself – and then demanding a contractual right which would allow this content to massively dilute the royalty pool for human artists, in a move that is nothing short of sponsoring artist replacement by AI."
  • News Corp. is in "advanced discussions" with companies to license its content for their AI engines, and anticipates that this will bring in "significant revenue."

AI Intellectual Property Update:

  • The House Judiciary Subcommittee on Courts, Intellectual Property, and the Internet held a hearing last week titled "Artificial Intelligence and Intellectual Property: Part II – Identity in the Age of AI." The Subcommittee heard from artists and creators about how Congress can support responsible innovation in applications of AI technology and address growing concerns about its misuse.
  • OpenAI's image generator DALL-E 3 will add watermarks and metadata that comply with the C2PA (Coalition for Content Provenance and Authenticity) standard to its generated images. The metadata will "indicate the image was generated through our API or ChatGPT unless the metadata has been removed." Through websites like Content Credentials Verify, individuals can check which AI tool was used to create an image generated on OpenAI's platforms.
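As an illustration only (not from the article): C2PA provenance data is embedded in JPEG files inside APP11 marker segments as a JUMBF manifest store labeled "c2pa". The rough, stdlib-only Python sketch below scans a JPEG's marker segments for that label. The function name and the synthetic example are ours; this is a heuristic for spotting embedded Content Credentials, not a validator — actual verification of the signed manifest should be done with a C2PA-aware tool such as Content Credentials Verify.

```python
import struct

def has_c2pa_hint(data: bytes) -> bool:
    """Rough heuristic: walk a JPEG's marker segments and report whether
    any APP11 (0xFFEB) segment -- where C2PA embeds its JUMBF manifest
    store -- contains the 'c2pa' label. Does NOT validate signatures."""
    if not data.startswith(b"\xff\xd8"):  # must begin with SOI marker
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:               # lost marker sync; give up
            break
        marker = data[i + 1]
        if marker == 0xD9:                # EOI marker: end of image
            break
        # Segment length is big-endian and includes its own two bytes.
        length = struct.unpack(">H", data[i + 2:i + 4])[0]
        segment = data[i + 4:i + 2 + length]
        if marker == 0xEB and b"c2pa" in segment:  # APP11 with C2PA label
            return True
        i += 2 + length
    return False
```

Finding the label only shows a manifest is present; whether the provenance claim is intact and authentic still depends on verifying the manifest's cryptographic signatures.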

AI Policy Update—Federal:

  • The Commerce Department announced "the creation of the U.S. AI Safety Institute Consortium (AISIC), which will unite AI creators and users, academics, government and industry researchers, and civil society organizations in support of the development and deployment of safe and trustworthy artificial intelligence (AI)." AISIC will be "developing guidelines for red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content." Members include BSA | The Software Alliance, Scale AI, Pfizer, the University of Maryland, the State of Kansas, and the New York Public Library.
  • At the Information Technology Industry Council's tech policy summit this week, House Science Committee ranking member Zoe Lofgren (D-CA) explained that Congress is unlikely to pass AI legislation any time soon and that the private sector will have to self-regulate until then. She said that although the Science Committee is working on legislation, it will not be a sprawling regulatory scheme addressing the full range of AI concerns.
  • Google is retiring the name "Google Bard" and rebranding Bard as Gemini, the name of its family of foundation models. Google is also launching Gemini Ultra, its "most capable" LLM yet. Gemini Advanced, the tier that provides access to Ultra, can act as a personal tutor (creating step-by-step instructions, sample quizzes, or back-and-forth discussions tailored to an individual's learning style), assist with advanced coding scenarios, and help digital creators generate fresh content.

AI Policy Update—European Union:

  • On February 2, 2024, the Committee of Permanent Representatives (COREPER) of the Council of the European Union voted unanimously to approve the proposed AI Act. While this is an important step, the COREPER vote does not conclude the legislative process. The Council of the EU (at the ministerial level) must still formally approve the proposed AI Act, and the European Parliament's responsible committees (IMCO and LIBE) must also give their green light before the proposed text can be presented to the Parliament's plenary. The committees' vote is expected to take place on February 12, 2024. If it is positive, the plenary session for the formal approval of the EU AI Act by the European Parliament is likely to take place before the European elections in spring 2024. Following the COREPER vote, the Council of the European Union published an updated version of the proposed EU AI Act.
  • The European Commission has opened a feedback period on a proposed EU regulation establishing a European High Performance Computing initiative for start-ups, which aims to boost European leadership in trustworthy AI. Interested stakeholders can submit feedback until March 26, 2024.
  • The Italian Data Protection Authority imposed a fine of €50,000 on the local government of the city of Trento, in northern Italy, for conducting two scientific research projects that deployed AI systems in ways that did not comply with the EU's General Data Protection Regulation (GDPR). The AI systems collected information in public places using microphones and cameras to detect potential safety threats. The regulator found the processing of the personal data at issue unlawful, in violation of the GDPR's transparency and lawfulness principles.
  • The Italian Data Protection Authority shared the initial findings of its preliminary investigation into OpenAI's ChatGPT service, alleging that OpenAI has committed one or more breaches of the EU GDPR.
  • The French Competition Authority is seeking feedback by March 22, 2024, for an opinion on potential competition issues in generative AI. The opinion will examine "strategies implemented by major digital players" that could allow them to leverage existing market power elsewhere to expand into AI. The inquiry will also look at Big Tech investments in AI companies, citing Microsoft's partnership with OpenAI and Amazon's and Google's investments in Anthropic.

AI Policy Update—International:

  • OpenAI, Google, Microsoft, and Meta are pressing the UK AI Safety Institute for clarity on the tests the Institute is conducting, how long they will take, and what the feedback process is if any risks are found. The Institute has begun testing existing models and has access to unreleased models, such as Google's Gemini Ultra.
  • Britain pledged to invest more than 100 million pounds ($125 million) to launch nine new AI research hubs and train regulators on the technology. About 90 million pounds would fund the research hubs, which will focus on applying AI in healthcare, chemistry, and mathematics, as well as a partnership with the United States on responsible AI. Another 10 million pounds would help regulators address the risks and harness the opportunities of AI.
  • The UK's Department for Science, Innovation and Technology published a report on generative AI in response to a consultation that closed in June 2023. On the AI copyright issue, the report found: "Our approach will need to be underpinned by trust and transparency between parties, with greater transparency from AI developers in relation to data inputs and the attribution of outputs having an important role to play. Our work will therefore also include exploring mechanisms for providing greater transparency so that rights holders can better understand whether content they produce is used as an input into AI models. The government wants to work closely with rights holders and AI developers to deliver this."
  • The UK's Intellectual Property Office (IPO) was unable to reach agreement on a voluntary code of practice for the use of copyrighted material in generative AI. The office "had been due to publish a code of conduct by the end of summer last year to clarify the protection of rights holders and guidance for working with tech groups as well as compensation."

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.