ARTICLE
16 March 2026

Client Alert: Court Signals Privilege Risks From Use Of Generative AI

Cowan Liebowitz & Latman PC



A decision of the U.S. District Court for the Southern District of New York highlights an emerging risk in the era of ChatGPT and Claude: using a public generative AI tool to analyze legal issues or develop legal strategy can jeopardize the attorney-client privilege normally applicable to a client's communications with an attorney and the work-product protection normally applicable to an attorney's preparation in anticipation of litigation. United States v. Heppner, Case No. 25-cr-503 (S.D.N.Y. February 17, 2026).

Courts may deem information entered into a widely available AI platform to have been disclosed to a third party, particularly where the platform's terms allow retention or review of user inputs.

This ruling is an important reminder that, absent careful controls, AI use can undermine crucial privilege protections. Clients and counsel should proceed with caution regarding public AI tools.

Judge Rakoff's Decision: AI Use Defeats Privilege Claim

On February 17, 2026, the Southern District's Judge Rakoff issued a memorandum opinion addressing whether a defendant's – not his attorney's – communications with the generative AI platform Claude were protected by attorney‑client privilege or the work‑product doctrine. The Court concluded that they were not protected.

The defendant used the publicly available AI platform Claude to generate analyses of facts, legal issues, and potential defenses during a criminal investigation into him, and he later shared the AI-generated reports with his attorneys. When the government sought access to those materials after seeing them listed on a privilege log, the defendant asserted privilege.

The Court first rejected the claim of attorney-client privilege, holding that communications with a public AI platform are not communications with counsel, nor are they confidential.

Of particular significance, the Court emphasized that the platform's privacy policy permits the AI provider to collect, retain, and review user inputs and to disclose them to third parties, including government authorities. In the Court's view, these terms defeated any reasonable expectation of confidentiality.

The Court also found that work‑product protection did not apply. Although the materials may have been prepared in anticipation of litigation, they were not prepared by or at the direction of counsel and did not reflect counsel's mental impressions or strategy at the time they were created. The fact that the materials were later shared with attorneys did not retroactively render them protected.

This decision appears to be among the first to hold that inputting confidential information into a public generative AI platform can result in a loss of privilege, even where the materials are ultimately intended for use in obtaining legal advice. Here, the outcome was driven by the facts of the case: the defendant used a publicly available AI tool on his own initiative to generate strategy material without any direction from counsel.

While this decision did not address circumstances in which an attorney uses an AI tool to assist in legal representation, or directs or supervises a client's use for the same purpose, it underscores the need for clients and counsel to proceed carefully when using generative AI in connection with legal matters.

A Different, More Protective Ruling

On February 10, 2026, the U.S. District Court for the Eastern District of Michigan issued an order taking a more protective view of work‑product doctrine in the context of alleged AI use. Warner v. Gilbarco, Case No. 2:24-cv-12333 (E.D. Mich. December 4, 2025). The Court rejected efforts to obtain discovery concerning a litigant's use of third‑party AI tools in connection with that lawsuit.

In Gilbarco, defendants sought an order compelling production of "all documents and information" concerning the plaintiff's use of AI tools. They argued that any privilege or work‑product objections to "AI materials" should be overruled (or, at minimum, that the plaintiff should be forced to log the materials).

The Court denied defendants' request, concluding the information sought was not discoverable and, in any event, was not relevant or proportional.

The Court emphasized that waiver of work-product protection "has to be waiver to an adversary or in a way likely to get in an adversary's hand," but "ChatGPT (and other generative AI programs) are tools, not persons." The Court added that granting defendants' motion would require revealing plaintiff's "internal analysis and mental impressions," which are not discoverable as a matter of law.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

