Artificial Intelligence (AI) is a rapidly evolving facet of modern technology that has already had a significant impact on the way we live our lives. The applications for AI systems seem endless, and there appears to be no end in sight to this innovation.

As we have reported previously, with the widespread release of AI applications like ChatGPT we are entering uncharted territory with respect to the legal rules and principles that govern the use of data in training and operating AI algorithms, and the generative outputs produced by AI applications. Several high-profile legal proceedings have been commenced worldwide (with some determined) dealing principally with direct copyright infringement of data used to train AI systems,1 direct and indirect copyright infringement of AI-generated outputs,2,3 and patent eligibility of AI-generated content.4 Interested parties and commentators eagerly await the determination of many of these proceedings to provide guidance on these evolving intellectual property issues and the application of current copyright laws and regulations in various jurisdictions.

However, in a US case filed on 27 December 2023,5 The New York Times has sued OpenAI and Microsoft for copyright infringement over the use of the Times' content in their AI models, and has also raised issues of trade mark infringement in the generative output and dilution of its trade marks through their association with inaccurate content. This new twist in IP law associated with trade mark infringement and dilution has not, to this author's knowledge, been raised previously – at least in Australian jurisprudence.

Briefly, the copyright issues in the complaint concern:

  • creating datasets for training and replicating extensive copies of The Times' works;
  • training the GPT model by mapping and allowing the model to recall and reproduce specific pieces of information, phrases, or even larger text segments that it has encountered during training in a process called "memorisation";
  • storing, processing and reproducing the Times' works, which have been "memorized" leading to unauthorised reproduction;
  • disseminating generative output containing copies and derivatives of the Times' works;
  • Microsoft's liability for assisting and contributing to the infringement by providing infrastructure and a range of other services and software;
  • OpenAI and Microsoft's liability for contributing to end users' infringement in generating output; and
  • whether the US "fair use" defence6 applies in serving a new "transformative" purpose.
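
The "memorisation" process described in the allegations above can be illustrated with a toy verbatim-overlap check between a model's output and a training text. This is a simplified sketch for illustration only; it is not how the parties or the courts measure memorisation, and the text snippets are hypothetical.

```python
def ngrams(text, n=5):
    """Return the set of n-word sequences appearing in a text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def verbatim_overlap(training_text, model_output, n=5):
    """Fraction of the output's n-grams that appear verbatim in the training text."""
    out = ngrams(model_output, n)
    if not out:
        return 0.0
    return len(out & ngrams(training_text, n)) / len(out)

# Hypothetical snippets: one output reproduces the training text verbatim,
# the other merely paraphrases it.
training = "the quick brown fox jumps over the lazy dog near the riverbank"
memorised = "the quick brown fox jumps over the lazy dog"
paraphrase = "a fast brown fox leaps above a sleepy dog"

assert verbatim_overlap(training, memorised) == 1.0   # fully memorised
assert verbatim_overlap(training, paraphrase) == 0.0  # no verbatim reuse
```

A high overlap score on long passages is the kind of evidence a plaintiff might point to as reproduction of "memorised" training material, whereas paraphrased output scores near zero.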

However, it is the unauthorised use of trade marks in the generative output, and the dilution of those trade marks through their association with inaccurate content, that is the focus of this article.

The New York Times claims that the potential for AI chatbots to "hallucinate", and thus attribute incorrect information to The New York Times as its source, amounts to trade mark infringement and threatens high-quality journalism.

The complaint states that:

ChatGPT defines a "hallucination" as "the phenomenon of a machine, such as a chatbot, generating seemingly realistic sensory experiences that do not correspond to any real-world input." Instead of saying, "I don't know," Defendants' GPT models will confidently provide information that is, at best, not quite accurate and, at worst, demonstrably (but not recognizably) false. And human reviewers find it very difficult to distinguish "hallucinations" from truthful output.

Reminiscent of the HAL 9000 computer in Stanley Kubrick's 1968 epic science fiction film, 2001: A Space Odyssey!

By way of example, The Times' complaint alleges that:

In response to a query asking for New York Times articles about the Covid-19 Pandemic, ChatGPT's API returned a response with fabricated article titles and hyperlinks that purport to have been published by The Times. The Times never published articles with these titles, and the hyperlinks do not point to a live website.

It is claimed that these "hallucinations" mislead users as to the source of the information they are obtaining, leading them to incorrectly believe that the information provided has been vetted and published by The Times. Such "hallucination" potentially dilutes the repute (accuracy) of The New York Times brand and may "undermine and damage The Times's relationship with its readers and deprive The Times of subscription, licensing, advertising, and affiliate revenue."

Interestingly, the concept of dilution7 in a strict sense is arguably not recognised under Australian trade mark law. Indeed, it has been argued that Article 16.3 of TRIPS8 does not oblige WTO members to provide for anti-dilution protection at all.9 Conversely, other commentators have argued that trade mark law in Australia covers the use of well-known trade marks that will likely cause consumers to infer a connection between unrelated goods or services to the detriment of the original trade mark owner under section 120(3) of the Trade Marks Act 1995 (Cth). However, the High Court's obiter in Campomar10 appears to have drawn a distinction between dilution as a type of harm which can arise in the presence or absence of confusion, and specific anti-dilution measures whose operation does not depend on the existence or otherwise of confusion.

The author also questions whether the moral rights of the author of each of the Times' works may be infringed, both by false attribution and by subjecting the work to derogatory treatment that is harmful to the author's honour or reputation. It would also seem that a claim for breach of sections 18 and/or 29 of the Australian Consumer Law might be open should these circumstances arise in Australian litigation.

Conclusion

As AI continues to evolve, our legal understanding and regulations must adapt to a landscape where digital memory and human creativity intersect in unprecedented ways. The resolution of the Times' claim and several other cases currently before the courts over similar concerns will set precedents which will shape the future of AI and its role in our digital society.

Such jurisprudence may also guide future legislative changes which may be required:

  • to clarify the law of dilution in respect of such trade mark "hallucination" cases in Australia similar to the US Federal Trademark Dilution Act of 1995 and Trademark Dilution Revision Act of 2006;
  • to confirm whether the business model of training the AI algorithm on publicly available information with "memorisation" using "vector embedding"11 without authorisation is copyright infringement; and
  • to consider whether Australia's "fair dealing" exceptions require widening akin to "fair use" in US jurisprudence.

Footnotes

1 J. Doe et al. v GitHub, Inc., Microsoft Corporation and OpenAI GP, LLC et al., 22 Civ. 6823 (N.D. Cal. Nov. 10, 2022) at https://githubcopilotlitigation.com/pdf/06823/1-0-github_complaint.pdf.

2 Naruto v. Slater, 888 F.3d 418 (9th Cir. 2018).

3 www.copyright.gov/docs/zarya-of-the-dawn.pdf.

4 Commissioner of Patents v Thaler [2022] FCAFC 62.

5 The New York Times Company v. Microsoft Corporation (1:23-cv-11195) District Court, S.D. New York and the complaint at https://storage.courtlistener.com/recap/gov.uscourts.nysd.612697/gov.uscourts.nysd.612697.1.0.pdf.

6 In Australia, there is no direct comparable to the US "fair use" defence and the Australian "fair dealing" exceptions have quite limited application.

7 Trade mark dilution, traditionally understood, refers to the use of a mark that results in the impairment of its distinctive quality or damage to its reputation through the creation of negative associations. The focus of anti-dilution law is on conduct that causes harm to the mark itself: the aim is to protect the mark's inherent "selling power" as distinct from its ability to guarantee the trade origin of particular goods or services.

8 The Agreement on Trade-Related Aspects of Intellectual Property Rights between all member nations of the World Trade Organization.

9 Handler, Michael — "Trade Mark Dilution in Australia?" [2008] UNSWLRS 48.

10 Campomar Sociedad Limitada v Nike International Ltd (2000) 202 CLR 45.

11 Vector embedding is a numerical representation of data that captures semantic relationships and similarities, making it possible to perform mathematical operations and comparisons on the data for various tasks like text analysis and recommendation systems in machine learning algorithms.
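
The concept in this footnote can be sketched in a few lines: texts are mapped to vectors, and semantic similarity is measured mathematically, commonly by cosine similarity. The toy three-dimensional vectors below are hypothetical stand-ins; real embedding models produce vectors with hundreds or thousands of dimensions.

```python
import math

# Hypothetical toy embeddings: each vector is a numerical representation
# of a short text (real models produce much higher-dimensional vectors).
embeddings = {
    "pandemic news report":  [0.90, 0.80, 0.10],
    "Covid-19 article":      [0.85, 0.75, 0.20],
    "chocolate cake recipe": [0.10, 0.20, 0.95],
}

def cosine_similarity(a, b):
    """Compare two vectors: 1.0 means same direction, 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

query = embeddings["pandemic news report"]
scores = {text: cosine_similarity(query, vec) for text, vec in embeddings.items()}

# Semantically related texts score higher than unrelated ones.
assert scores["Covid-19 article"] > scores["chocolate cake recipe"]
```

It is this kind of numerical comparison that lets an AI system retrieve and relate pieces of text it has encountered, which is why embedding of training data features in the copyright analysis above.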

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.