11 June 2025

AI Use In The Legal Industry: How To Avoid A Red Face (Or Worse...)

Lewis Silkin


In two recent cases referred to the High Court under its Hamid jurisdiction (Ayinde, R (On the Application Of) v Qatar National Bank QPSC & Anor [2025] EWHC 1383 (Admin)), the court underscored the perils of unverified reliance on generative artificial intelligence tools for legal research and case preparation.

Facts

In the first of the two cases, Frederick Ayinde v The London Borough of Haringey [2025] EWHC 1040 (Admin), the claimant challenged the local authority's failure to provide interim accommodation pending a statutory review of a homelessness decision. The legal submissions prepared by the barrister on behalf of the claimant included references to five non-existent cases, as well as a misstatement of the effect of the relevant statutory provision. When the defendant's solicitors raised concerns about the authenticity of the cited authorities, the claimant's legal team failed to provide satisfactory explanations or corrections, leading the court to conclude that the submissions (and the citations within them) had been generated, at least in part, by AI tools. Despite protestations from counsel for the claimant that she had not used any generative AI tools to assist with her legal research, the court referred her to the Bar Standards Board for further investigation, stating:

On the material before us, there seem to be two possible scenarios. One is that Ms Forey deliberately included fake citations in her written work. That would be a clear contempt of court. The other is that she did use generative artificial intelligence tools to produce her list of cases and/or to draft parts of the grounds of claim. In that event, her denial (in a witness statement supported by a statement of truth) is untruthful. Again, that would amount to a contempt.

The second case, Hamad Al-Haroun v (1) Qatar National Bank QPSC & (2) QNB Capital LLC, concerned a substantial damages claim for alleged breaches by the defendants of a financing arrangement. The defendants filed applications to dispute the court's jurisdiction, and the judge ordered that the defendants be given additional time to file and serve evidence in relation to those applications. In response, the claimant applied to set aside the order and provided a witness statement, alongside a witness statement from his solicitor. Both statements included citations which either did not exist or, if they did exist, "did not contain the quotations that were attributed to them, did not support the propositions for which they were cited, and did not have any relevance to the subject matter of the application". The claimant admitted to using "publicly available artificial intelligence tools, legal search engines and online sources" to draft his witness statement, and the solicitor admitted to relying on the research provided by the claimant without independent verification. Although the solicitor stated that "he had no intention to mislead the court" (a position which seems to be accepted in the judgment), his actions were categorised as a "lamentable failure to comply with the basic requirement to check the accuracy of material that is put before the court" and he was referred to the Solicitors Regulation Authority for further action.

Further thoughts

The judgment highlights the importance of maintaining professional and ethical standards in the face of emerging technologies. In particular, it underscores the golden rule of AI usage (in any context, but particularly in the legal industry): check, validate, and check again!

Indeed, the judgment makes clear that:

Freely available generative artificial intelligence tools, trained on a large language model such as ChatGPT are not capable of conducting reliable legal research. Such tools can produce apparently coherent and plausible responses to prompts, but those coherent and plausible responses may turn out to be entirely incorrect. The responses may make confident assertions that are simply untrue. They may cite sources that do not exist. They may purport to quote passages from a genuine source that do not appear in that source. ... Those who use artificial intelligence to conduct legal research notwithstanding these risks have a professional duty therefore to check the accuracy of such research by reference to authoritative sources, before using it in the course of their professional work (to advise clients or before a court, for example).

So, what happens now?

Are stricter guidelines regarding the use of AI tools in the legal industry required? In recent years, the Bar Council, the Bar Standards Board, the Solicitors Regulation Authority, and the Courts and Tribunals Judiciary have already published guidance for legal professionals about the use of AI tools. The general thrust of this guidance has been a warning to use AI tools cautiously - for example, the judiciary guidance notes on legal research that "AI tools are a poor way of conducting research to find new information you cannot verify independently."

If stricter guidelines were introduced, how would they be balanced against growing calls by clients for the time and cost efficiencies which can be achieved with (careful) use of AI? Many law firms, including Lewis Silkin, are embracing the use of AI technologies, where such use is undertaken in accordance with considered usage policies and guidance. Putting additional parameters around such use feels extreme - nevertheless, the judgment in this case is a good reminder to us all to use AI as a tool to supplement, but not replace, our own expertise and judgment.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
