Artificial Intelligence is rapidly influencing the legal profession and remains a hot topic for attorneys and the courts. AI has proven useful for everything from routine administrative tasks to case-analysis summaries. It helps lawyers research more quickly, draft faster, and analyze large amounts of information more efficiently. However, as attorneys' use of and reliance on AI increase, so do critical concerns about accuracy, ethics, and the balance between technological efficiency and human oversight.
Manhattan Commercial Division Justice Joel M. Cohen recently discussed a major pitfall attorneys face when using AI in the civil case Ader v. Ader, 87 Misc. 3d 1213(A) (Supreme Court, New York County, 2025). In Ader, the plaintiff, as executor of her late husband's estate, brought a breach of contract claim against the decedent's son, seeking repayment of amounts the estate had paid on the loan for the son's Manhattan townhouse. After prevailing on summary judgment, the plaintiff moved for sanctions against the defendant pursuant to 22 NYCRR § 130-1.1 for making several misrepresentations in his opposition papers, including inaccurate citations and quotations that appeared to be "hallucinated" or filled in by an AI tool.
The plaintiff initially raised the issue on reply in the summary judgment briefing. Defense counsel, without admitting or denying the use of AI, acknowledged that multiple passages were inadvertently enclosed in quotation marks and were meant to be paraphrases or summarized statements. Defense counsel also assured the court that every effort would be made to prevent a recurrence of these mis-cites. However, when opposing the plaintiff's motion for sanctions, defense counsel's submission contained another wave of fake citations and quotations, more than doubling the number of mis-cites in the previous filing. At oral argument, defense counsel initially acknowledged "some citation errors" but did not outright admit to using AI. He eventually conceded that he had used AI and, although he had verified the results, must have missed some errors.
Justice Cohen regretfully stated that the case added "yet another unfortunate chapter to the story of artificial intelligence misuse in the legal profession," but clarified that the "use of AI is not the problem per se." The problem occurs when "attorneys abdicate their responsibility to ensure their factual and legal representations to the Court — even if originally sourced from AI — are accurate." Justice Cohen stated that "[b]y now the risks and consequences of AI hallucinated citations should be familiar" and reiterated that reliance on the research of others, including AI, is not a valid excuse for presenting false citations.
Just weeks after the Ader decision, the Administrative Board of the Courts published its proposal to add a new Part 161 to the Rules of the Chief Administrator of the Courts regarding the use of AI in preparing court documents. Under this proposal, by signing a document submitted to the court, the attorney certifies that the document does not contain any fabricated or fictitious content generated by AI. If the court determines that this requirement has not been met, the attorney or party may face sanctions or other remedial actions.
In its memorandum supporting the proposal, the Advisory Committee on Artificial Intelligence reiterated that the proposal does not establish a new duty for attorneys but rather expands the existing obligation to include AI use. The committee directly cited 22 NYCRR §§ 130-1.1a and 130-1.1, explaining that "[t]hese existing duties and responsibilities — effectively requiring attorneys and parties to review a paper before submitting it to a court, to ensure the accuracy and reliability of all statements made therein — are manifestly applicable in the context of papers prepared with the assistance of generative AI."
The proposed rule intentionally does not require attorneys or parties to disclose to the court whether they used generative AI when submitting papers. The committee views such disclosure as unnecessary and considers a blanket certification requirement for AI-assisted documents unwarranted. However, the new rule would not prevent a judge from later asking whether AI was used to prepare a document, especially if the court identifies potential AI-generated hallucinations.
Although the proposal merely supplements the existing rule requiring attorneys to review their papers for accuracy, the underlying responsibility is unchanged: counsel's duty of candor cannot be delegated. Before submitting anything to a court, practitioners must always verify their statements and citations.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.