- Wiley attorneys examine AI impact on privileged communication
- Prepare by refining AI use policies, review vendor data terms
Generative artificial intelligence makes it easy to draft documents, answer questions, conduct research, or complete other tasks, but it can pose unique problems for attorneys.
Privileged communications are legally protected from disclosure, but they lose that protection if shared with someone else later. The advent of generative AI tools raises several unique questions about the concept of privilege for companies, which can take action now to mitigate risks.
When lawyers talk about privilege, they usually refer to two distinct types of protection: attorney-client privilege and work product immunity.
Attorney-client privilege protects communications made between privileged persons (attorney, client, or agent) in confidence for the purpose of obtaining or providing legal assistance for the client. Provided confidentiality is maintained, these communications are privileged, and their disclosure can't be compelled.
By contrast, work product immunity is narrower. It only protects records prepared by or for a party in anticipation of litigation or for trial.
Both types of protection are forfeited where there is no expectation of privacy. Even if a communication or file is protected, that protection can be waived if an attorney or client relays it to a non-party, thereby forfeiting any expectation of privacy.
These principles become critical when evaluating AI tools, as they could directly or indirectly reveal confidential, proprietary, or even privileged information.
Generative AI Tools
By their nature, public AI tools—built for and servicing up to millions of different users—pose a significant risk to the privilege of any information or records shared with the application.
An attorney may need to input privileged information for a public AI tool to conduct effective legal research or draft responsive briefs. Doing so risks waiving privilege, because many of these tools reserve the right to store and later use all input data.
Even if an attorney is aware of the risks, a paralegal or other individual working at the direction of an attorney may come to rely on public AI tools and unwittingly expose privileged information.
An enterprise AI tool, built for and servicing only internal users, mitigates such risks by ensuring any information shared for training or in the form of user inputs stays within the organization. But enterprise tools aren't without their own risks.
Allowing an enterprise tool to use legal documents as training data could render that content available to the company at large. A company could then unintentionally waive privilege by allowing employees without a "need to know" to have access to any privileged information and work product.
In-House Challenges
In-house lawyers who wear dual hats are particularly at risk of waiving privilege if they use these tools. Materials developed by a business person aren't privileged, but what if that person is also a lawyer? If they use AI to create what's believed to be privileged content, is it entitled to protection?
Given how easy these tools can be to use, it's possible that lawyers will have less to do on the legal side of the house and shift more firmly toward business functions, further eroding the privilege protections their involvement might otherwise support. Conversely, AI tools may tempt non-lawyers to delve into more legal areas without entitlement to any sort of protection, opening a company to additional disclosures.
AI tools can be further used to assist with core legal functions, such as using a company's prior contracts to generate new agreements. If a lawyer uses the program, are the inputs and outputs privileged? What about a non-lawyer? And does it matter if the agreement is totally new, or if the AI tool is merely suggesting specific provisions based on pre-drafted content and information?
Many tools exist for just that purpose, but the effect on any privilege protection when using such a tool is still unknown. There are, however, some ways to prepare.
Refine AI use policies now. If your company permits AI usage, it may be prudent to wall off legal work product from any enterprise AI tools so that its content can't be shared with employees outside of those who "need to know."
If you're looking to leverage AI in legal work such as contract drafting, consider building a tool just for attorneys, or consider relying on outside counsel and their tools to ensure privilege isn't inadvertently waived.
Review vendor policies on retaining input data carefully. Privilege protection relies on a reasonable expectation of privacy. If a vendor's terms reveal that all inputs are sent back to the vendor, even just for quality control, privilege may be waived.
Companies should take another look at their vendor agreements as well as their vendors' AI policies. To the extent there are concerns with vendor terms or policies, it's better to address these issues now rather than after a potential waiver of privileged information.
Keep abreast of how employees are using AI tools. This shouldn't be a set-it-and-forget-it policy. Technology changes rapidly, and new legal developments on these issues could happen any day. Keeping track of which employees use AI products—and how—will ensure you don't inadvertently waive privilege.
Don't wait for a subpoena or discovery request to start thinking about these privilege issues. As AI tools become more ingrained in your company's day-to-day operations, you can avoid a lot of problems down the road by addressing potential privilege issues now.
Originally Published by Bloomberg Law
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.