The Bottom Line
- It is critical to understand how AI providers handle user data. Legal strategy shared with an AI platform that permits data retention and training may lose privilege protections.
- Even enterprise-level AI platforms pose litigation risks. Businesses must establish clear policies and procedures for using AI tools and train employees to understand the risks posed by using AI platforms without legal guidance.
- Any use of AI tools for legal analysis should be clearly tied to attorney oversight and direction to better support attorney-client privilege and work product protections.
Generative AI tools are great for brainstorming, organizing thoughts, and sharpening ideas. But what happens when you disclose legal advice or strategy to an AI platform?
A recent federal decision delivers a clear warning: communications with generative AI platforms may not be protected by attorney-client privilege or the work product doctrine, even when they involve legal strategy and anticipated litigation. The ruling is among the first to squarely confront how long-standing privilege principles apply to modern AI tools, and it has implications for companies and individuals across industries.
The Court’s Analysis
In the criminal case United States v. Heppner, the defendant engaged in a series of chats with the generative AI platform Claude after he received a grand jury subpoena. Heppner prompted Claude to analyze his legal exposure and discussed his anticipated strategy, including likely factual and legal arguments, in the event he was indicted. The government subsequently indicted Heppner and obtained copies of these AI chats while executing a search warrant at his home.
Generally, the government would be entitled to use such documents in prosecuting Heppner. However, Heppner argued that his chats with Claude, and Claude's responses, were protected by the attorney-client privilege because he relied on information obtained from his lawyers in creating the chats, and because the chats were created for the purpose of obtaining legal advice. He also argued that his chats were protected by the work product doctrine, which can shield documents prepared in anticipation of litigation.
Addressing what he called a "question of first impression nationwide," Judge Jed Rakoff of the Southern District of New York overruled Heppner's assertion of privilege over his AI chats.
Why Attorney-Client Privilege and Work Product Protection Did Not Apply
Attorney-client privilege protects communications between a client and an attorney that are intended to be confidential and are made for the purpose of obtaining or providing legal advice. Judge Rakoff noted that the user agreement governing use of Claude undermined any expectation of confidentiality: it grants the platform's operator, Anthropic, the right to collect user prompts for training and other purposes, and it expressly reserves Anthropic's right to disclose user data to third parties.
Further, because Heppner knowingly shared his communications with a third party under terms that disclaimed confidentiality, the court concluded there was no reasonable expectation that the communications would remain confidential. As a result, the AI chats were not protected by the attorney-client privilege.
The court also rejected Heppner’s argument that the chats were protected under the work product doctrine. Judge Rakoff noted that this doctrine may protect materials prepared by or at the direction of counsel, but typically it does not apply to materials prepared by a client independently.
Heppner’s lawyers conceded that they did not direct him to use Claude, thus defeating his claim for work-product protection.
Broader Implications of the Decision
Judge Rakoff’s decision demonstrates that the use of AI will still be “subject to longstanding legal principles, such as those governing the attorney-client privilege and work product doctrine,” and underscores the necessity of consulting with counsel before using AI tools for legal analysis or strategy.
Courts have traditionally held that communications shared with a third party are not confidential, and therefore not privileged, unless the third party acts as the lawyer's or the client's agent and is necessary to facilitate the provision of legal services. Judge Rakoff noted that Claude arguably might have functioned as the lawyer's agent, bringing the chats within the protection of the attorney-client privilege, had Heppner's counsel directed its use. Similarly, the court left open the possibility that work product protection could apply to AI-generated materials created at the instruction of counsel. The court's analysis makes clear that any use of AI tools for legal strategy or advice must be structured and directed by counsel to be protected from discovery.
The decision also highlights the importance of understanding how AI providers handle user data and access. Heppner lost privilege protections in part because he used the free version of Claude, which is governed by terms of use that expressly disclaim privacy protections. Enterprise-level AI platforms generally offer enhanced confidentiality protections, including contractual limits on data use, restricted access, segregation of customer data, and deployment within secure environments. While these features do not automatically establish privilege, they may strengthen arguments that communications were intended to remain confidential.
Even so, it is important to exercise caution within the closed environment offered by enterprise-level AI platforms. The attorney-client privilege applies only to communications between a client and his or her attorney. As Judge Rakoff observed, AI tools are not attorneys, and many expressly state they will not provide legal advice when prompted. This further underscores the need to consult with counsel before using AI to discuss legal strategy or advice.
Practical Guidance for Using AI
As courts begin to address how AI fits into traditional discovery doctrines, organizations should take proactive steps to manage risk. This decision highlights the need for clear AI governance policies and employee training, particularly in the context of pending or threatened litigation. Anyone who uses AI must understand how AI providers handle user data and access, and enterprise-level AI platforms, with their enhanced data protections, are an important safeguard for maintaining confidentiality.
Heppner was among the first court decisions to address these issues directly, but it will not be the last. We will continue to closely monitor this area of law to understand how businesses can better protect themselves from potential risks of AI use.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.