This judgment comes as a stark and clear warning to legal practitioners of their duties and responsibilities in this age of AI, which, in the words of Dame Victoria Sharp P, creates "risks as well as opportunities."
The Facts
The two cases had been referred and listed under the Divisional Court's Hamid jurisdiction, following the actual or suspected use of generative AI platforms by lawyers to create materials to be put before the court, without verifying the authenticity and accuracy of the information provided (typically a fake citation or quotation).
In the Ayinde case (Ayinde v Haringey and Al-Haroun v Qatar National Bank [2025] EWHC 1383 (Admin)), the barrister representing Mr. Ayinde in his judicial review claim, Ms. Forey, misstated section 188(3) of the Housing Act 1996 and cited non-existent cases to substantiate her grounds. One of the citations relied upon was in fact the reference for a different case, concerning a charity's liability to pay business rates, which was entirely irrelevant to the judicial review claim. The defendant applied for a wasted costs order against the applicant and Ms. Forey on the basis that she had cited five fake cases, failed to disclose copies of the cases when requested to do so, and misstated section 188(3) of the Housing Act 1996.
In the case of Al-Haroun, the court's judicial assistants reviewed 45 citations and discovered that 18 of the cases did not exist, while many of the real cases cited contained none of the quotations attributed to them. These citations had been put before the court by Mr. Al-Haroun's solicitor. On 9 May 2025, Dias J said:
"Putting before the court supposed "authorities" which do not in fact exist, or which are not authority for the propositions relied upon is prima facie only explicable as either a conscious attempt to mislead or an unacceptable failure to exercise reasonable diligence to verify the material relied upon."
The judicial assistant damningly concluded: "The vast majority of the authorities are made up or misunderstood."
Such conduct, as the High Court highlighted, is likely to breach the following duties of barristers under the Bar Standards Board (BSB) Handbook:
- the duty to observe one's duty to the court in the administration of justice (CD1);
- the duty to act with honesty and integrity (CD3);
- the duty not to act in a way which is likely to diminish the trust and confidence which the public places in barristers or the profession (CD5);
- the duty to provide a competent standard of work to each client (CD7); and
- the duty not to knowingly or recklessly mislead or attempt to mislead the court or anyone else (rC3.1 and rC9.1).
The outcomes (in accordance with the BSB Handbook) which compliance with these Core Duties is designed to achieve include:
Outcome 1 (oC1): the court is able to rely on information provided to it by those conducting litigation and by advocates who appear before it;
Outcome 2 (oC2): the proper administration of justice is served; and
Outcome 4 (oC4): those who appear before the court understand clearly their duties to the court.
In October 2023, the Bar Standards Board published a blog entitled "ChatGPT in the Courts: Safely and Effectively Navigating AI in Legal Practice",¹ which highlights the following:
- "AI, while a promising tool, is not a replacement for human responsibility and oversight;"
- "A lawyer is answerable for their research, arguments, and representations under their core duties to the Court and to their client. These duties continue to hold true when utilising AI;"
- The blog also made various recommendations, namely:
- that training in legaltech and AI be provided to lawyers, for instance as part of CPD courses;
- that lawyers understand the strengths, weaknesses and scope of application of each tool; and
- that lawyers verify, review, interpret and contextualise AI outputs to confirm their correctness and adapt them to each client's needs.
The Bar Council also published guidance in January 2024, "Considerations when using ChatGPT and generative artificial intelligence software based on large language models",² which makes the following observations on the use of generative AI LLM systems:
- "The ability of LLMs [large language models] to generate convincing but false content raises ethical concerns. Do not therefore take such systems' outputs on trust and certainly not at face value... It matters not that the misleading of the court may have been inadvertent, as it would still be considered incompetent and grossly negligent." The need for lawyers to cross-check the outputs of AI LLM software is therefore of paramount importance;
- "Generative AI LLMs can therefore complement and augment human processes to improve efficiency but should not be a substitute for the exercise of professional judgment, quality legal analysis and the expertise which clients, courts and society expect from barristers;"
- "Be extremely vigilant not to share with a generative LLM system any legally privileged or confidential information (including trade secrets), or any personal data, as the input information provided is likely to be used to generate future outputs and could therefore be publicly shared with other users. Any such sharing of confidential information is likely to be a breach of Core Duty 6 and rule rC15.5 of the Code of Conduct, which could also result in disciplinary proceedings and/or legal liability;"
- "barristers will need to critically assess whether content generated by LLMs might violate intellectual property rights, especially third-party copyright... one should be careful not to use, in response to system prompts, words which may breach trademarks or give rise to a passing-off claim."
- "Irresponsible use of LLMs can lead to harsh and embarrassing consequences, including claims for professional negligence, breach of contract, breach of confidence, defamation, data protection infringements, infringement of IP rights (including passing off claims), and damage to reputation; as well as breaches of professional rules and duties, leading to disciplinary action and sanctions."
The Court's Response
Whilst a finding of contempt of court was contemplated, the High Court instead referred the lawyers to their respective regulatory bodies and issued a stark warning to the legal profession:
"our overarching concern is to ensure that lawyers clearly understand the consequences (if they did not before) of using artificial intelligence for legal research without checking that research by reference to authoritative sources. This court's decision not to initiate contempt proceedings...is not a precedent. Lawyers who do not comply with their professional obligations in this respect risk severe sanction."
Potential Sanctions against Legal Practitioners
The use of AI-generated materials in court can lead to serious consequences for legal practitioners. Potential sanctions include referral to the police for criminal investigation, contempt of court proceedings, referral to the relevant regulator (the BSB for barristers and the Solicitors Regulation Authority for solicitors), strike-out, wasted costs orders and public admonishment in a court judgment.
A Global Phenomenon
The judgment refers to a myriad of cases across jurisdictions, demonstrating that the legal profession worldwide is grappling with similar problems:
- United States: In Mata v Avianca, Inc., two lawyers and their law firm were each fined $5,000 for submitting AI-generated fake citations to a New York court. In another case, litigation sanctions were imposed on the plaintiff and the lawyers were ordered to make financial payments.
- Australia: The Federal Circuit and Family Court referred the legal representatives to their respective regulators in a judicial review case.
- New Zealand: The court drew attention to guidance issued by the judiciary on the use of AI in courts and tribunals.
- Canada: The applicant's lawyer filed a document citing non-existent cases. Masuhara J opined that such unchecked citations "can lead to a miscarriage of justice", and cost sanctions were imposed on the lawyer.
Commentary: A Mauritian Perspective
The Mauritian Code of Ethics for Barristers ("Code of Ethics") reflects, to a great extent, the Core Duties enshrined in the BSB Handbook:
- Rule 2.3 This Code applies to all barristers practising in Mauritius. A barrister shall not –
(a) engage in conduct, whether in the pursuit of his profession or otherwise, which is –
(i) dishonest or otherwise discreditable to a barrister;
(ii) prejudicial to the administration of justice; or
(iii) likely to diminish public confidence in the legal profession or the administration of justice or otherwise bring the legal profession into disrepute;
- Rule 3.10 A practising barrister has an overriding duty to the Court to ensure in the public interest that the proper and efficient administration of justice is achieved. He shall assist the Court in the administration of justice and shall not deceive or knowingly or recklessly mislead the Court.
- Rule 8.4 A barrister shall not handle a matter which he knows or ought to know he is not competent to handle, without co-operating with another barrister who is competent to handle it.
- Rule 8.5 A barrister shall not accept instructions unless he can discharge them promptly, having regard to the pressure of other work.
- Rule 10.1 A barrister shall in all his professional activities act promptly, conscientiously, diligently and with reasonable competence and shall take all reasonable and practicable steps to ensure that professional engagements are fulfilled.
AI hallucinations pose an urgent and multifaceted challenge. Responsibility for these errors should, rightly, rest with those who use generative AI, namely the legal professionals themselves. Such unchecked use can easily erode public confidence in judicial processes, undermine the integrity of legal proceedings and expose practitioners to ethical and professional liability. As the High Court observed:
"Those who use artificial intelligence to conduct legal research notwithstanding these risks have a professional duty therefore to check the accuracy of such research by reference to authoritative sources, before using it in the course of their professional work."
Pupil masters, heads of chambers and managing partners should also adopt a risk-averse stance and not blindly trust the work of their juniors or pupils.
The Code of Ethics properly outlines the behaviours expected of barristers, yet it ought to be interpreted in accordance with the living instrument doctrine: it is not frozen at the time of its publication but evolves with the times. The Code of Ethics could therefore be said to provide adequate safeguards, but the burden remains on practitioners to make judicious and ethically sound use of AI. Will Mauritian courts adopt a similar approach to that of the English High Court if and when they face a similar situation?
There is an urgent need for the Bar Council and the Judiciary in Mauritius to publish guidelines on the use of AI for the benefit of the legal profession as a whole.
At Appleby, internal policies on the use of AI are already in place. Other companies are also encouraged to adopt internal policies to guide their employees on the use of AI tools in their respective fields.
Footnotes
1. Bar Standards Board, 'ChatGPT in the Courts: Safely and Effectively Navigating AI in Legal Practice', Blog, 8 October 2023, https://www.barstandardsboard.org.uk/resources/chatgpt-in-the-courts-safely-and-effectively-navigating-ai-in-legal-practice.html
2. The Bar Council, 'Considerations when using ChatGPT and generative artificial intelligence software based on large language models', 30 January 2024, https://www.barcouncilethics.co.uk/documents/considerations-when-using-chatgpt-and-generative-ai-software-based-on-large-language-models/
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.