ARTICLE
18 August 2025

The Case Against AI In Legal Work

Shaub, Ahmuty, Citrin & Spratt LLP

Contributor

Shaub, Ahmuty, Citrin & Spratt LLP, established in 1994, is a litigation-focused law firm grounded in tenacity, professionalism, and a commitment to client success. With more than three decades of practice, the firm remains dedicated to these founding principles, offering exceptional advocacy and achieving favorable outcomes for its clients.

The firm pledges meticulous attention to detail, in-depth understanding of case facts, relevant science, and the litigation landscape, ensuring clients receive the best possible representation. Prioritizing responsiveness and trust, its team is known for being relentless, direct, and highly skilled in navigating complex legal matters.

With an outstanding reputation among peers, adversaries, and clients, Shaub, Ahmuty, Citrin & Spratt LLP prides itself on fostering trust, delivering results, and cultivating legal talent. Committed to excellence and legacy-building, the firm continues to uphold its values with purpose and conviction.


Artificial Intelligence (AI) is all around us and has become an integral part of our lives in ways we do not, and may never, fully realize. Even in the practice of law, AI can enhance legal research (subject to personal verification) and automate routine tasks. However, the legal profession has not yet reached the point where it can formally integrate AI into practice beyond administrative support without significantly compromising the quality and accuracy of our work. As we have all seen, attorneys testing the AI waters have, in many cases, done so with disastrous results.

The Limits of AI in Legal Reasoning

Practically speaking, lawyers review, analyze, and think, drawing on their education, training, experience, and even intuition. AI does none of that; if it could, we would all have to find other careers. Instead, a large language model simply digests a vast amount of text and predicts what a human would most likely say in response to a question. Its output is only as reliable as the prompt, and it can fabricate answers by generating plausible-sounding falsehoods, commonly known as "hallucinations." AI models may also rely exclusively on public-domain data rather than licensed databases such as Westlaw or Lexis, and so may lack access to an up-to-date body of the very authorities that form the foundation of the American common law system.

Relying on that output can therefore have serious repercussions for lawyers and their clients individually, and for the profession as a whole. Moreover, there is often no way to verify what information the AI actually reviewed in reaching its result. We have all heard the horror story of the attorney who filed a brief citing completely fabricated authorities. Not good. And as courts continue to grapple with the issue, some now require attorneys to disclose whether AI was used in preparing a legal filing.

Ethical and Privacy Risks

Additionally, AI operates without the ethical constraints that govern attorneys. Besides triggering various provisions of the Code of Professional Responsibility, using AI in one's legal practice raises parallel concerns under state and federal privacy laws, especially our favorite acronym: HIPAA. For example, running medical records through an AI program for a summary could constitute a HIPAA violation, because open, publicly hosted AI systems may retain all data entered into them. At bottom, leaning on AI in legal practice implicates issues of confidentiality, lawyer competency, billing, and disclosure to both courts and clients, all of which can be avoided by simply avoiding AI.

A Cautious Path Forward

Until a workable policy can be implemented within a more fully developed regulatory landscape, we at SACS believe it is best to refrain from using AI tools in connection with legal work product. This policy reflects our commitment to ethical practice, legal precision, and the protection of sensitive information until AI technology is sufficiently reliable and regulated.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
