Courts are increasingly encountering AI-generated material in proceedings, a growing trend that raises important questions about accuracy, ethics and professional conduct.
1) Irish Courts: "all the hallmarks" of an AI tool
In Reddan v an Bord Pleanála [2025] IEHC 172, the High Court (Court) refused a lay litigant's application for leave to apply for judicial review, emphasising that his allegations fell short of the 'substantial grounds' threshold required for leave applications.
The applicant had sought judicial review of An Bord Pleanála's decision to grant planning permission to Nenagh Golf Club. The Court commented on one particular element of the applicant's Statement of Grounds, where the applicant described a statement by an architect involved in the matter as "subordination to perjury". It was noted that this was not a legal phrase recognised in Irish jurisprudence but likely originated from Scottish or American law. When challenged on this issue, the applicant conceded that it was a principle he discovered while conducting online legal research. The Court observed that it seemed to derive from an artificial intelligence source and had 'all the hallmarks of Chat GPT, or some similar AI tool'.
This judgment illustrates a growing and potentially dangerous trend emerging across the legal industry in different jurisdictions, where lay litigants are being penalised for relying on false or inaccurate AI-generated legal research. Inaccurate output from AI tools, referred to as 'hallucinations', occurs because AI models generate responses based on statistical patterns derived from extensive datasets. Certain AI tools, most notably publicly available ones, do not verify the accuracy of their output, meaning that a significant portion of it can be inappropriate, inaccurate, biased or fabricated.
2) US Courts: lawyers have a gatekeeper role to ensure accuracy
This trend was first observed in the US. Most recently, in February 2025, lawyers representing a plaintiff in a dispute against Walmart were sanctioned for citing AI-generated cases in a pre-trial motion. Of the nine cases cited, eight were non-existent. The lawyers admitted that the references had been generated using their law firm's internal AI system. The US court determined that the lawyers had an ethical obligation to verify the authenticity of the cases cited. Consequently, the court fined the lawyers and removed one of them from the lawsuit.
This case comes nearly two years after the widely publicised US District Court decision in a personal injuries claim, Mata v Avianca, Case No. 22-cv-1461 (PKC) (S.D.N.Y.). Lawyers for the plaintiff submitted opposition papers in response to a motion to dismiss the case. They admitted that ChatGPT had been used to identify rulings favourable to their client's position and that it had fabricated 'decisions' in the process. Against this background, the court found the lawyers acted with 'subjective bad faith' sufficient for sanctions under the Federal Rules of Civil Procedure and stated:
Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance. ... But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.
In another recent case, a panel of judges in the New York State Supreme Court Appellate Division's First Judicial Department encountered a litigant who relied on an AI-generated avatar as his lawyer. The judges disapproved, in particular, of the litigant's lack of candour in failing to inform the court that the 'lawyer' who would be arguing the case via video was not real.
3) UK Courts: use of fabricated case law
In the UK, the court decision in Frederick Ayinde -v- The London Borough of Haringey [2025] EWHC 1040 (Admin) considered fabricated case citations in legal submissions. When it was discovered that the claimant's case had been presented in reliance on several unidentifiable legal authorities, the claimant's legal team classed the issue as a 'minor, cosmetic error' that could be easily explained. The court rejected this argument, finding the citations to be substantive fakes for which no explanation had been provided. Although it was not confirmed that AI had been used to generate these cases, the court found that providing a fake description of five fake cases in submissions was improper, unreasonable and negligent, and sufficient to amount to professional misconduct. The court emphasised that it is the responsibility of a legal team to ensure the accuracy of legal submissions and made a wasted costs order against the claimant's legal representatives, highlighting the potential financial and reputational impact on legal practitioners.
Key takeaways
When used correctly, AI tools can be incredibly beneficial for clients and their lawyers. However, these cases highlight the significant risks of relying solely on AI tools to draft legal submissions and conduct legal research, and the need for legal practitioners to exercise proper oversight to ensure accuracy when using such tools. Failing to do so can lead to lost cases, penalties and cost consequences, and reputational damage for practitioners.
The use of AI in litigation is discussed at length in our Litigation Trends 2025, available here.
For more information on digital solutions available from William Fry, see here.
Contributed by Caitlin Devitt
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.