ARTICLE
5 March 2026

Negating Attorney-Client Privilege: When AI Conversations Are A Privilege Bomb

Ward and Smith, P.A.

Contributor

Ward and Smith, P.A. is the successor to a practice founded in 1895. Our core values of client satisfaction, reliability, responsiveness, and teamwork are the standards that define who we are as a law firm. We are an established legal network with offices located in Asheville, Greenville, New Bern, Raleigh, and Wilmington.

A New York federal court just issued the first ruling to tackle head-on whether conversations with a public AI chatbot can be protected by attorney–client privilege or the work product doctrine. The short answer: they can't.

Bottom line: If you or anyone at your company pastes legal advice, investigation materials, or other sensitive information into one of these tools, you could be handing that information to your adversaries on a silver platter.

Boom Goes the Privilege

Here's what happened. A target of a federal investigation used a public AI platform to create strategy-focused "reports" about the facts and law of his case. Federal agents later seized electronic devices containing those AI exchanges during a search.

The defendant claimed privilege and work product protection, arguing that because he eventually shared the AI outputs with his lawyers (and had fed in information he'd gotten from his lawyers), the materials should be shielded. The court said no, for three straightforward reasons:

  • The AI isn't your lawyer. There was no attorney–client relationship between the defendant and the AI platform. Privilege protects confidential communications with your attorney. Chatbots don't qualify.
  • There was no real expectation of confidentiality. The AI provider's terms of service allowed the company to collect user inputs and outputs, use them to train its models, and even disclose them to third parties, including regulators. Under those conditions, nobody can reasonably claim they expected the conversation to stay private.
  • The chats weren't about getting legal advice from counsel. The defendant initiated these conversations on his own. The AI tool itself disclaimed giving legal advice. That's a far cry from the kind of attorney-directed communication privilege is designed to protect.

The court also shut down the work product argument. Work product protection is meant to shield a lawyer's thinking and strategy. These documents were created by the client, on his own, using a public tool, not prepared by or at the direction of counsel.

The court went on to make clear that even information that starts out privileged loses its protection once it is pasted into a public chatbot.

The Privilege Bomb Blast Radius is Bigger Than You Think

You don't need to face a criminal case for this ruling to hit home.

While the court's findings were narrowly tied to the facts of the case, its rationale for applying long-standing privilege principles to AI extends naturally to other common legal scenarios. Litigation discovery rules, regulatory investigations, enforcement actions, and internal investigations all depend on keeping certain information confidential and privileged.

Attorney-Client Privilege Doesn't Just Happen

For attorney-client privilege to apply, there must be an actual attorney-client relationship; the communication must be made for the purpose of obtaining legal advice; and the parties must maintain a reasonable expectation of confidentiality.

Typing a question into a public chatbot or Generative AI tool checks none of those boxes, no matter how legal the subject matter feels.

Under the court's framework, every AI chat about a legal issue that happens outside the attorney-client relationship is a potential exhibit waiting to be produced.

And employees are having these conversations with chatbots every day, asking how to handle a harassment complaint, pressure-testing a patent claim, or gut-checking whether a vendor deal raises compliance issues. None of these conversations involve an attorney, none carry a reasonable expectation of confidentiality, and under the court's reasoning, all of them could end up as exhibits in a lawsuit, a regulatory proceeding, or worse.

Attorney-Client Privilege is Not Permanent

Privilege protection is not a permanent label that follows information wherever it goes. The moment privileged content is shared with a public AI tool, that act of sharing constitutes a waiver of privilege, making the information fully discoverable by adversaries, regulators, and opposing parties.

The court's waiver analysis leads to an uncomfortable conclusion: any person who copies privileged material into a public AI tool, whether to summarize, brainstorm, or reorganize, is stripping that material of its protection in real time.

It doesn't matter whether the content is an investigation report, a legal memo, or a lawyer's negotiation strategy. If it goes into a chatbot whose terms allow the provider to access, train on, or disclose user data, the privilege may already be gone by the time the user hits 'enter.' In each case, the user may believe the interaction is harmless, but under this ruling, those exchanges could be fair game in litigation, a regulatory review, or a government investigation.

Free or Paid – the Attorney-Client Privilege Problem is the Same

In this case, the court zeroed in on the AI provider's specific privacy policy, which allowed it to collect user inputs and outputs and use that data to train its models and disclose it to third parties. The court's reasoning isn't limited to free or open-access tools.

Following its logic, any AI platform, whether free, paid, or commercially licensed, could present the same problem if its terms of service reserve the right to review, train on, or disclose user data. That means paying for a premium subscription or even a corporate license doesn't automatically fix the confidentiality issue.

The Growing AI Discovery Risk

Going forward, expect opposing parties and regulators to ask pointed questions about AI usage in depositions, custodian interviews, and subpoena negotiations. Being asked during a deposition "Did you use any AI tools to prepare for this deposition or to analyze documents related to this matter?" or receiving a subpoena that specifically demands "all communications with AI-based tools, including prompts, inputs, and outputs, related to [topic]" are no longer hypothetical scenarios; they are the natural next step after this ruling.

Three Ways Businesses Can Defuse the Attorney-Client Privilege Bomb

First, Set Clear Rules and Explain Why They Matter. Create or update a straightforward rule: no one uses public, consumer-grade AI tools for anything related to legal advice, attorney-provided materials, investigations, audits, disputes, or trade secrets. For example: "Employees may not input, paste, upload, or otherwise transmit any legal advice, attorney communications, investigation materials, draft pleadings, audit findings, or trade secrets into any AI tool that is not expressly approved by the Legal Department." A clear, concrete rule is far more effective than a vague directive to "use caution."

Second, Build Safe AI Channels and Put Lawyers in the Driver's Seat. Deploy an enterprise AI solution that contractually and technically prevents model training on your data, blocks provider access and disclosure, and keeps all interactions in your controlled environment. Review and negotiate your AI vendor agreements to ensure they include robust confidentiality, data segregation, and audit rights.

But don't stop at the technology. The court hinted that the outcome might have been different if a lawyer had directed the defendant's use of AI. That means attorney direction isn't just a best practice; it may be the key to preserving privilege. Require that any AI-assisted work involving legal content happen only under counsel's direction, within a documented workflow built for privileged communications.

Third, Train Your Teams. The risk this case exposed isn't just about what AI produces—it's also about what employees put in AI. Every prompt, every uploaded document, every pasted paragraph is a potential disclosure to a third party. Build a "pause before you paste" culture: before anyone touches a chatbot, they should ask whether what they're about to type or upload is privileged, confidential, or sensitive. If the answer is yes—or even maybe—stop and consult legal first.

Make AI governance a part of your regular compliance training so that the rules are reinforced, not just read once during onboarding. Run periodic tabletop exercises: short, scenario-based sessions where teams walk through realistic situations drawn from this ruling's facts.

The Fuse is Lit, Now What?

This ruling didn't break new legal ground; it applied longstanding privilege principles to a new technology and reached the conclusion most lawyers would have predicted. But that's exactly what makes it so important. The court confirmed that public AI tools are third parties, that sharing information with them can waive privilege just as easily as forwarding a confidential email to a stranger, and that no amount of after-the-fact attorney involvement can undo the damage.

Until more courts weigh in, the playbook is clear: set enforceable rules that explain the consequences, build safe AI channels with real confidentiality protections and lawyers in the driver's seat, instill a "pause before you paste" culture, and train your teams so that no one ends up like the defendant in this case, sitting on a pile of AI chat transcripts that just became the other side's best evidence.

Companies that act now can keep reaping AI's benefits. Those that don't are gambling that their employees will never type the wrong thing into the wrong tool, and this ruling shows exactly how that bet plays out.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

