7 November 2025

BRAINS v BOTS


Contributor

Murgitroyd is a full-service intellectual property firm with a commanding presence across Europe, the Americas and Asia.

Our European patent, trade mark and design attorneys delve into industry-specific knowledge to empower and elevate the world's most distinguished innovators and brands.

Our story is one of strategic growth and unwavering dedication to the field of intellectual property. Our journey began in Glasgow in 1975 with solid organic growth built on a foundation of knowledge, expertise, and passion for innovation. Over the years, we've been proactive in our approach to acquisitions, enabling us to curate an exceptional blend of talent and experience across all corners of the globe.

There are new threats from AI that IP professionals need to know about, says Alain Godement

The introduction of artificial intelligence (AI) into the daily operations of IP practices, and law firms in general, offers paradigm-shifting and transformational potential. AI could improve efficiency, decision-making ability and accuracy. A recent poll has shown that 64% of law firms already use AI software. It has the potential to entirely transform the profession and its practice through substantial cost reductions and time savings, thereby allowing fee earners to focus more of their attention on strategic management and decision-making, ultimately to the benefit of their clients.

That said, the technology is still in its infancy, and as adoption increases throughout the profession, firms are facing myriad new and emerging threats associated with AI's use. These threats are compounded by the increasing technicality and sophistication of systems, world geopolitics, and the ever-increasing rise of online criminality.

CITMA's AI Task Force was initiated to provide strategic oversight on the implications of AI and emerging technologies for the UK profession. It was tasked with identifying the risks and opportunities of AI, while providing actionable recommendations, to ensure the profession navigates these changes effectively and responsibly. You can read the Task Force's latest research report at citma.org.uk/aireport.

Emerging threats – and how firms can mitigate risk

To understand why some of these threats exist, we have to understand how AI functions at a fundamental level. AI works by learning patterns from large amounts of data, effectively simulating aspects of human cognition, albeit in a limited and specific way. In contrast to traditional software, it is not programmed with explicit, hand-written, step-by-step rules. Instead, AI ingests large amounts of data, called 'training data', and attempts to make connections between seemingly unrelated data sets in order to make predictions or recommendations based on what it has learned. This is commonly referred to as 'machine learning', and it is the cornerstone of most modern AI software.
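To make the distinction concrete, here is a minimal, purely illustrative sketch of pattern learning in Python: a toy classifier is trained on a handful of invented, labelled examples and then asked about a new one. The marks, labels and task are hypothetical; the point is simply that such a model encodes nothing beyond the patterns present in its training data.

```python
# A toy illustration of "learning patterns from data" (hypothetical data):
# the model is given labelled examples and infers statistical patterns from
# them -- it follows no hand-written rules, so its output is only as good
# as the data it was trained on.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented mark comparisons, flattened into single strings and labelled
# 1 (likelihood of confusion found) or 0 (no confusion).
training_examples = [
    "ZORVIK vs ZORVICK identical lighting goods",
    "PLUMWELL vs PLUMWEL similar plumbing services",
    "GRANITEPEAK vs MEADOWSONG unrelated furniture goods",
    "ORBITALIS vs HARVESTINE dissimilar software services",
]
labels = [1, 1, 0, 0]

model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
model.fit(training_examples, labels)  # the 'learning' step: patterns, not rules

# The prediction reflects nothing but regularities in the training data above.
print(model.predict(["ZORVIK vs ZORVIC similar lighting goods"]))
```

The same code retrained on different data would give different answers, which is precisely why the quality and integrity of that data matter so much.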

"There have been numerous examples of solicitors and barristers relying on generative AI for submissions without checking them for accuracy"

What this means is that the quality, accuracy and security of the training data are paramount. Inaccurate, untrustworthy or compromised training data will seriously hinder the ability of AI to output actionable content. The old coding adage therefore still holds true in the age of AI: garbage in, garbage out.

Data poisoning

Data poisoning is a type of cyber-attack in which a malicious actor feeds misleading or corrupt data to the AI with the aim of influencing future outputs. This can be achieved in a number of ways, such as injecting false or misleading information into the data set, modifying it or even deleting relevant portions of it. This inevitably influences the AI software's ability to render accurate information through the introduction of biases, errors and vulnerabilities.
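By way of a minimal, hypothetical illustration of the mechanism, the sketch below trains the same simple model twice: once on clean labels and once on labels an attacker has quietly flipped. The data and model are toys, and real poisoning attacks are far subtler, but the same query can receive a different answer depending on which data set the model learned from.

```python
# Illustrative label-flipping 'data poisoning' sketch (hypothetical data):
# the same model, trained on clean versus partially corrupted labels,
# can give different answers to the same question.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "identical marks identical goods",
    "identical marks similar goods",
    "similar marks identical goods",
    "dissimilar marks dissimilar goods",
    "dissimilar marks unrelated services",
    "similar marks similar goods",
]
clean_labels = [1, 1, 1, 0, 0, 1]      # 1 = likelihood of confusion
poisoned_labels = [0, 0, 1, 0, 0, 0]   # several labels flipped by an attacker

def train(labels):
    """Train a small text classifier on the given labels."""
    return make_pipeline(CountVectorizer(), LogisticRegression()).fit(texts, labels)

query = ["identical marks similar goods"]
print("clean model    :", train(clean_labels).predict(query))
print("poisoned model :", train(poisoned_labels).predict(query))
```

Scaled up to the data sets behind a commercial legal AI tool, the same effect would be much harder to spot, which is why the output always needs critical human review.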

Think of a generative AI tool that a Trade Mark Attorney could use to generate submissions in an opposition. As we well know, submissions in many oppositions can be repetitive and will generally follow a very similar pattern, which one might find tiresome to reproduce endlessly: compare the marks, compare the goods and services, and make arguments as to the resulting likelihood of confusion and/or association.

It may be tempting simply to input details of the issues and ask the AI software to output submissions. Let's now assume that a malicious third party has subtly compromised the AI software's training data set of past cases by modifying the reasoning within a judge's written decision, or even by introducing entirely made-up precedent. The degradation of the output could be imperceptible, and the output could therefore be taken at face value if not critically reviewed. Large language models have the innate ability to sound very convincing. This is not a fluke; it is by design.

While it would be obvious to most practitioners if, all of a sudden, Sabel v Puma were to deny the need for a global assessment, a compromised data set could influence reasoning in more subtle ways. With enough repetition, false or misleading information could train the software to make mistaken or illogical arguments in comparing marks or the descriptions of goods/services, because it has relied on an improper set of previous facts.

There have been numerous examples, including within the UK legal profession, of solicitors and barristers relying on generative AI for submissions without checking them for accuracy, with catastrophic results. As recently as March 2025, in a UK IPO appeal decision before the Appointed Person, skeleton arguments submitted by both parties were criticised for containing references to past decisions that either did not exist or were not relevant to the issues at hand (BL O/0559/25, 28th March 2025). The lay party in the proceedings admitted to having used ChatGPT. The Appointed Person's comments were, to say the least, scathing. While these instances were chiefly the result of AI hallucinations left unchallenged by the submitting party, the parallel is clear. Data poisoning has effectively the same result and could be seen as a form of maliciously induced hallucination.

One of the CITMA AI Task Force's recommendations is not only to assess critically the quality and currency of the data set being used, but also to ensure that there is always a human in the loop reviewing the output of AI software for accuracy. AI should be seen as a tool for enhancing or supplementing Trade Mark Attorneys in their day-to-day work, not replacing them.

As a result of the fast-paced evolution of AI, there are a number of other emerging threats that firms and legal professionals should be aware of:

  • The use of generative AI as a tool for IP infringement – Malicious third parties have already begun using generative AI tools to create counterfeits. Deepfakes, the 'cloning' of a celebrity's or public personality's face, voice or likeness into a misleading image or video for malicious purposes, are the sharp end of this trend.
  • Biased and discriminatory outputs – AI that has been trained on biased data may tend to perpetuate or, at worst, amplify bias and discrimination in legal research, its predictions on a case or client screening. The quality, accuracy and impartiality of training data are key considerations in the selection of an AI tool.
  • Loss of client confidentiality – The training data used for AI software may inadvertently contain privileged or confidential client information, meaning it could become publicly accessible if an AI tool were prompted to reveal it. It will be particularly important for firms and software developers to have robust internal processes in place to prevent this (a minimal redaction sketch follows this list).
  • Proliferation of low-quality 'AI' tools – As firms' appetite for innovation grows, so too does the supply of tools that are marketed as 'Powered by AI' but are often little more than ChatGPT wrappers. These may be developed by teams with no specific legal training or understanding of a profession's regulatory obligations.
  • Use of AI tools that may have affiliations with state actors – AI companies in countries with less-than-ideal democratic or free-speech track records may experience governmental authorities exercising undue influence over the output of their large language models (LLMs), in order to reflect a specific political ideology and worldview. There are also concerns in relation to whether any technology based on such LLMs can truly guarantee safety and confidentiality, among other concerns. Firms and developers should exercise extreme caution in relation to these AI tools.
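Returning to the confidentiality point, the sketch below illustrates one small ingredient of such an internal process: a simple, pattern-based filter that strips obvious identifiers from text before it is sent to an external AI tool. The patterns, matter-reference format and example prompt are assumptions made for the purpose of the example; a real control would combine policy, tooling and human review rather than rely on regular expressions alone.

```python
# Minimal, illustrative redaction filter (not a complete confidentiality
# control): replace obvious identifiers with labelled placeholders before a
# prompt leaves the firm. Patterns and formats here are hypothetical.
import re

REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "MATTER_REF": re.compile(r"\b[A-Z]{2,4}-\d{4,6}\b"),     # e.g. a 'TM-12345' style reference
    "UK_PHONE": re.compile(r"(?:\+44\s?|0)\d{4}\s?\d{6}\b"),
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a labelled placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Draft submissions for matter TM-12345; client contact j.smith@example.com."
print(redact(prompt))
# -> "Draft submissions for matter [MATTER_REF REDACTED]; client contact [EMAIL REDACTED]."
```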

Malicious actors exploit GitHub by faking the popularity of code packages

'Starjacking' and GitHub

In the world of coding, GitHub is the Babylon of code repositories. In broad terms, it is effectively an online platform where people can store and share computer code and collaborate on open-source programs. It is particularly useful when attempting to troubleshoot coding issues by having a worldwide community of coders who can contribute.

Code can be bundled into packages, and users can 'star' the repositories they find useful; this popularity signal is what software developers use to assess whether a code package is trustworthy and whether its code can be integrated into their own software project. There is no point spending time and money re-inventing the coding wheel when existing code has been vetted by hundreds of thousands of users and all one needs to do is tweak it slightly so that it meets one's individual needs (similar to using a well-drafted letter before action as a template for future letters, to be adjusted as required).

Therein lies the crux of the issue, which affects not only GitHub but also any online platform that allows 'users' to vote, like, rate or otherwise assign a value to a post or product. The validity and fidelity of these votes are ultimately contingent on the platform's ability to police them effectively or, in GitHub's case, on package managers' competence to validate repository links. Instagram has famously had issues with fake 'followers', with some celebrity users coming under scrutiny for allegedly buying followers in bulk to boost their accounts' visibility. On GitHub, stars can be bought, or savvier malicious attackers will create or hijack a package and link it to a repository that appears to be popular. Irrespective of the mechanism of action, the effect is the same: users are misled into believing a package is trustworthy. They download it, then integrate it into their software project.

"AI should be seen as a tool for enhancing or supplementing Trade Mark Attorneys in their day‑to‑day work, not replacing them"

It isn't hard to see how this could have catastrophic consequences if it were to affect AI software for legal professionals. Examples include the covert leaking of sensitive client information, compromised software that installs malware, AI output manipulation – the list is endless, but in each case the result is significant financial and reputational harm for the legal institution involved, including possible violations of the IPReg Code of Conduct or the General Data Protection Regulation.

While this particular mechanism has since been patched, new exploits will doubtless emerge. It is therefore imperative that any open-source code be carefully vetted and reviewed, and that developers do not rely solely on star ratings.
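As one small illustration of what vetting beyond the star count can look like, the sketch below (in Python, using the public PyPI and GitHub APIs) resolves the repository a package claims to come from and queries that repository directly, rather than trusting a popularity figure displayed alongside the package. The package name used is only an example, and a check like this is a heuristic starting point, not a complete supply-chain control.

```python
# Illustrative sketch: look at the repository a PyPI package actually links
# to, and fetch that repository's own metadata, instead of trusting a star
# count shown elsewhere. Heuristic only; not a substitute for code review.
import requests

def claimed_repo(package: str) -> str | None:
    """Return the GitHub URL a PyPI package declares in its metadata, if any."""
    info = requests.get(f"https://pypi.org/pypi/{package}/json", timeout=10).json()["info"]
    candidates = list((info.get("project_urls") or {}).values()) + [info.get("home_page") or ""]
    for url in candidates:
        if url and "github.com" in url:
            return url
    return None

def repo_stats(repo_url: str) -> dict:
    """Fetch name and star count directly from the GitHub API for the claimed repo."""
    owner_repo = "/".join(repo_url.rstrip("/").split("github.com/")[1].split("/")[:2])
    data = requests.get(f"https://api.github.com/repos/{owner_repo}", timeout=10).json()
    return {"full_name": data.get("full_name"), "stars": data.get("stargazers_count")}

url = claimed_repo("requests")   # example package; substitute the one being assessed
if url:
    print(url, repo_stats(url))  # verify the linked repository itself, not a proxy figure
```

Even then, a matching, popular repository proves little on its own: the code still needs to be read, and ideally pinned and scanned, before it goes anywhere near a production system.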

Further threats

The threats listed above are some of the more recent to emerge that may adversely affect the implementation and use of AI software in the legal profession. Enumerating them all would be a largely fruitless endeavour, because the prevailing dynamic in the digital environment, and in the AI-enabled one in particular, is that of an arms race: with every new functionality comes, inevitably, a new vulnerability and, in turn, its fix through the cycle of patches and upgrades. The growing sophistication of cyber-attacks, such as those that recently affected Marks & Spencer and the Co-operative Group to critical effect (around £440m in damages), is a reminder that the determination and ingenuity of malicious actors cannot be underestimated. While many malicious attacks have their roots in criminal organisations, these can often be acting on behalf of, in collaboration with or at the behest of foreign state actors as part of a coordinated cyber-attack campaign.

High-profile cyber‑attacks are a reminder of the need for caution around new technologies

AI offers the possibility of transformative opportunities for legal professionals through unmatched increases in efficiency and accuracy, to the benefit of enhanced client service. That fact notwithstanding, its integration into the daily operations of practices has the potential to expose them to increased scrutiny, whether from clients, regulatory bodies such as IPReg, the Information Commissioner's Office, the courts or even the Crown Prosecution Service.

It will therefore be of the utmost importance for firms to maintain a consistently cautious, informed approach to this new technology, balancing the need and desire for innovation against the fundamental requirement for vigilance and security. In parallel, it will be the responsibility of individual practitioners to upskill, at pace, in order to stay abreast of the capabilities and limitations that AI introduces, while learning to use AI-powered tools safely, accurately and responsibly. Only with proactive management of the risks can practices safely navigate the potential pitfalls of AI, while harnessing its power and safeguarding an essential characteristic of our profession: client trust.

Originally published by CITMA.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
