ARTICLE
27 December 2025

AI In The Courtroom: Lessons From Recent Cases And Regulatory Shifts

Barry Nilsson

Contributor

For 60 years, Barry Nilsson has been shaping a better legal experience, putting our clients first - where they belong. We have grown to become an award-winning national law firm of more than 500 staff, working alongside our clients and evolving our services to meet their changing needs.

Recent cases and court guidelines on artificial intelligence (AI) highlight the growing tension between innovation and professional responsibility in legal practice, demonstrating how misuse of AI can result in serious disciplinary consequences and ethical challenges.

Courts respond to further misuse of AI

AI continues its rapid transformation of the Australian legal industry, with legal AI adoption at an all-time high across the profession. Against this backdrop of widespread use, two recent decisions illustrate the Australian courts' firm stance on unverified AI-generated content.

Re Walker [2025] VSC 714

In this Victorian Supreme Court decision, it emerged in the course of a hearing that the defendant's solicitor had used AI whilst preparing part of her client's opening submissions. That use contravened the Court's guidelines on the use of AI tools in proceedings, published in May 2024 to assist both legal practitioners and self-represented litigants in response to the growing use of generative AI in legal practice, and resulted in reliance on non-existent or 'hallucinated' citations.

Justice Moore discovered the issue while reading the defendant's opening submissions prior to trial, noting four authorities with which his Honour was unfamiliar and which his chambers were unable to locate. After making enquiries, including of the defendant's solicitor, Justice Moore expressed concern that submissions had been filed with the Court containing authorities which did not exist.

After hearing submissions from the defendant's solicitor, Justice Moore concluded that it is unacceptable for solicitors or barristers to use AI in the production of court documents unless the product of that use is independently and thoroughly verified. His Honour then determined an appropriate sanction in accordance with the Court's inherent jurisdiction, imposing a reprimand on the solicitor for her conduct.

Mertz & Mertz (No 3) [2025] FedCFamC1A 222

In this Full Court of the Federal Circuit and Family Court of Australia (Division 1) decision, the appellant initially filed a Summary of Argument and List of Authorities that included fictitious citations generated by AI. The document was superseded by an amended version which removed the fictitious authorities. However, the amendments were not identified to the Court, and both the original and amended documents bore the names and contact details of King's Counsel and Counsel.

After enquiries made by the Court, the solicitor conceded that AI had been used in the preparation of the originally filed Summary of Argument and List of Authorities. While she denied using AI herself, she conceded that a paralegal had used AI to prepare the original documents without her knowledge.

The Court rejected the excuse. In so doing, it reiterated that practitioners remain accountable for accuracy, regardless of whether they delegate tasks.

The matter was treated seriously, with referrals made to the South Australian Legal Profession Conduct Commissioner and the Victorian Legal Services Board and Commissioner.

Updated ChatGPT guidelines

Beyond the courtroom, regulators and technology providers are also tightening standards.

On 29 October 2025, OpenAI updated ChatGPT's usage policies to include terms prohibiting the use of ChatGPT for the '...provision of tailored advice that requires a licence, such as legal or medical advice, without appropriate involvement by a licensed professional.'

While the practical impact remains uncertain, the updated terms reiterate the need for the exercise of professional oversight when using such a platform.

Key takeaways

The cases of Re Walker and Mertz highlight the willingness of courts to exercise their inherent jurisdiction to discipline practitioners who fail to meet their professional responsibilities when using AI. They also reiterate that practitioners remain responsible for the accuracy of work, even where its preparation has been delegated.

Recent updates to ChatGPT's usage policies reinforce this message. OpenAI now expressly prohibits the use of its platform for providing tailored advice that requires a licence without appropriate involvement by a licensed professional. This update signals a growing expectation that AI tools should complement, not replace, professional judgment.

For legal practitioners, these developments reinforce the core duties of competence, diligence, and honesty when leveraging AI in legal practice. To remain compliant, practitioners must:

  • verify all AI-generated content before submission
  • avoid inputting confidential or sensitive information into public AI tools
  • disclose AI use when appropriate and maintain transparency with clients and courts, and
  • stay informed about evolving guidelines and technological risks.

Re Walker [2025] VSC 714

Mertz & Mertz (No 3) [2025] FedCFamC1A 222

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
