ARTICLE
21 July 2025

Generative AI And Trademarks: The Need For Legislative Intervention


The Context

Ownership is the cornerstone of intellectual property rights (IPR). It provides legal protection to creators, ensuring that they can derive proprietary benefits from their creations and safeguard their efforts and interests. This traditional framework of ownership is facing new trials owing to the advent of artificial intelligence (AI). AI has fundamentally altered the way intellectual property is created, distributed, and consumed. Unlike physical marketplaces, the digital realm is inherently transnational. With its expanding generative capabilities, AI has now entered creative domains, including composing music, designing visuals and creating brand identities. The involvement of AI in generating trademarks complicates traditional notions of authorship and ownership. A single AI-generated logo uploaded in one country can be accessed, replicated, or commercialized globally within seconds. Traditional intellectual property laws, rooted in state-based legal systems and territorial jurisdiction, struggle to keep up with this fluid digital environment. This article examines the difficulties in enforcing trademark rights against infringing acts involving AI-generated content.

AI-Generated Trademarks: The Architecture

AI tools are being employed to autonomously generate logos, brand names, slogans, and other trade identifiers. Trademark law, unlike copyright or patent law, does not ordinarily concern itself with questions of authorship; rather, its focus is on commercial source identification and the accrual of goodwill. The regulatory framework surrounding trademarks presumes a direct link between the mark and its human or juristic author, whose intention, reputation, and commercial goodwill imbue the mark with distinctiveness and legal protectability.

However, such a presumption may be disrupted when dealing with AI-generated marks. AI-generated trademarks are typically produced by systems trained on large datasets containing existing logos, brand names, design elements, colour palettes, and typographic styles. When prompted with specific input commands, these systems draw from their training data to generate trademark options. The problem is that these datasets borrow from publicly available or scraped information, some of which includes copyrighted or trademarked material. Thus, when an algorithm is provided with certain descriptors to design a brand identity, the generative process is derivative by design. The outputs are thereby shaped by the structure and biases of the training data.

  • The Can of Worms

This creates two major risks in the context of trademark law. First, there is a high likelihood of the generated mark imitating pre-existing marks, especially when the training data contains well-known brand identifiers. AI does not inherently recognize legal boundaries, such as protected trademarks or similarity under trademark jurisprudence. Second, because these models tend to prioritize statistically probable outputs or those most frequently requested, they are prone to generating repetitive or highly similar trademarks in response to a limited set of inputs.

Traditional mechanisms of enforcement, premised on territoriality and identifiable infringers, are largely insufficient to handle such cases. Several questions remain unanswered for situations that are inherently cross-border. For example, what happens when a user in India uses an AI platform hosted in Europe to generate a mark, which is then used on a website accessible worldwide? If that mark incidentally resembles a registered US trademark, to whom is liability assigned? How should the digital platform hosting the infringing mark combat this?

Even when such infringement is detected, the regulatory tests might not suffice. First, intention and knowledge are parameters often material to the assessment of damages or secondary liability. If a generative AI model autonomously produces a logo resembling an existing mark, in the absence of any human intervention or malicious intent, who then is liable? Second, AI systems trained on massive, structured sets of text or data scraped from the internet may inadvertently ingest registered marks, only to later recombine them into derivative outputs. This challenges the principle of independent creation, which can be raised as a defence in infringement proceedings. Determining whether consumers subconsciously associate the derivative mark with the original is tricky. Lastly, enforcement becomes practically impossible when the infringing use is disseminated through decentralized platforms or by anonymous users, especially in jurisdictions with weak or inconsistent IP enforcement.

These challenges posed by AI-generated marks are no longer speculative; they are concrete issues emerging from the widespread use of AI in creative fields, and the existing legal framework is ill-equipped to address them. The growing gap between the traditional IP framework and the challenges posed by AI-generated marks underscores the need for targeted statutory intervention to ensure clarity, fairness and effective enforcement in this evolving landscape. The urgency of such action is further illustrated by the cases before the judiciary discussed in the following segment.

  • The Courts v. The Bot

A concrete example of how algorithmic systems aggravate these risks can be seen in the case of Lush v. Amazon [2014] EWHC 181 (Ch). As described above, where AI systems recombine data without recognizing legal boundaries, Amazon's automated systems effectively "recreated" the Lush mark in a misleading context. Its algorithm bid on the trademarked keyword "Lush" in Google Ads and its internal search engine automatically suggested alternative bath products under the "Lush" name, despite the platform not selling genuine Lush products. Consumers searching for genuine "Lush" products were therefore presented with a list of competing products.

Even though the use was facilitated by an algorithm rather than a human decision-maker, the Chancery Division of the High Court of Justice in England and Wales held that this practice amounted to "use of the mark in the course of trade". It reasoned that Amazon's systems were designed to deliberately attract customers by exploiting the brand's reputation. Thus, the court aligned Amazon's use of the "Lush" trademark with the statutory test for infringement, despite the absence of direct human input at every stage. This judgment showcases how courts are broadening traditional notions of "use" to capture automated and AI-driven practices. The automatic functioning of the algorithm replaces the intentional exploitation of a mark with its incidental algorithmic association. Even where no human directly endorses the infringing use, to what extent should companies be expected to predict and take pre-emptive measures against the complex outputs generated by AI systems? This is where tests such as "intentionality" and "course of trade" fall short.

Recently, Microsoft's Bing AI imaging tool drew Disney's attention because of the Pixar-style images it was generating. The issue was that Disney's logo was clearly visible in the generated illustrations. Disney asked Microsoft to prevent AI users from infringing its trademarks. This incident illustrates how the principle of independent creation, traditionally a strong defence in trademark disputes, becomes problematic in the context of generative AI. Users were able to produce images resembling Disney–Pixar logos simply by entering prompts such as "Pixar". The AI system generated output based on its training, with no evidence of human intent to infringe, yet the resulting images nonetheless risked diluting the distinctiveness of a well-known mark. In this case, Microsoft eventually introduced filters to block such prompts after receiving Disney's complaint. In the absence of clear legal guidelines, the onus lies on platforms to self-regulate. It is evident from these cases that existing trademark law has a lacuna that needs to be addressed.

Beyond the Algorithm

The growing intersection of artificial intelligence and trademark law exposes a fundamental mismatch between existing legal principles and the realities of automated creation. Traditional trademark regimes, designed for human actors and territorially confined commerce, fail to address systems that autonomously generate trademarks and environments in which such infringing marks can be used and disseminated across borders. This problem is intensified by the global and decentralized nature of online platforms, where jurisdictional boundaries blur, enforcement mechanisms remain inconsistent, and identifying responsible parties is often impractical. As AI becomes integral to commercial and creative activity, relying on ad hoc judicial interpretation will only deepen uncertainty. Legislative intervention is therefore essential to allocate responsibility clearly and ensure that trademark law continues to safeguard brand integrity in this rapidly evolving technological landscape.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
