The New Hampshire Bar News recently published articles by Tom Jarvis and Misty Griffith considering the adoption of generative "artificial intelligence" (AI) tools in the legal profession. As Jarvis and Griffith noted, data-driven generative AI tools such as OpenAI's ChatGPT, Google's Bard, Casetext's CoCounsel, and the like have prompted concern in the legal field that such tools might replace human lawyers. In our view, the technology does not pose that danger yet, but Jarvis and Griffith rightly pointed out some of the concerns it raises for the legal profession, among them the unauthorized practice of law by in-court AI assistants and the accountability and transparency issues addressed in recent ABA resolutions. Our goal in this article is to provide some further guidance to courts and bar associations as they consider AI in the practice of law.

Generative AI applications enable users to augment existing workflows with a high degree of automation and digital assistance. Need a letter written? ChatGPT can do that. Need research done? Bard can help with that. These platforms are powerful and readily available, and their plain-language interfaces and affordable pricing keep the cost of entry low. The incentive to adopt them is clear: do more, with less, faster.

But there are tradeoffs that limit the utility of these tools to the legal profession. Generative AI applications can draft documents, but an attorney who enters client information into them risks violating the duty of confidentiality unless the client provides informed consent. Microsoft (through its partnership with OpenAI) and Google stand behind the two most popular natural-language generators, ChatGPT and Bard, and their access to data entered into those applications is a potential problem. Although both companies are exploring ways to address this concern, it is far from certain that their efforts will result in a product lawyers can safely use. Additionally, generative AI can conduct legal research and provide a synopsis, but that research may be unreliable if a human attorney does not check it. A New York attorney recently discovered this when the court and opposing counsel could not locate many of the decisions and quotations cited in a brief he drafted relying on ChatGPT for legal research.

So attorneys, bar associations, and courts are justifiably cautious in their consideration of AI. That said, a categorical response to the emergence of a new AI technology is akin to a categorical response to "transportation" because of the emergence of roller skates. Roller skates and an airplane can both carry a person from one place to another, but what matters is not that shared capacity; it is the distinct function each serves. Similarly, bar associations and courts should not make categorical declarations about AI, because each AI technology is unique. As a practical matter, moreover, attorneys will not be able to avoid these technologies in the practice of law. Developers will build AI into critical legal applications, legal matters involving the use of AI technology will become more prevalent, and clients will develop expectations shaped by the business standards that grow out of widespread adoption of AI tools.

For example, the DoNotPay dispute that Jarvis wrote about featured a tech startup offering an earpiece to a pro se litigant, who would supposedly receive real-time input in court from the company's AI application. For obvious reasons, this prompted allegations that the company was engaging in the unauthorized practice of law. However, courts should avoid taking a categorical approach. Rather, they should take notice of this development, as it speaks to a demand among some litigants for a lower-cost alternative to retaining an actual attorney in court and raises a legitimate question: just as courts have published form documents to assist pro se litigants, is there a way for courts to govern AI applications that assist pro se litigants? Could a list of vetted, pre-approved applications enhance access to justice without enabling the unauthorized practice of law? The question is open now, but future AI technologies may supply answers. Courts should be both vigilant and open-minded as such technologies enter the market.

Similarly, public AI applications like ChatGPT and Bard potentially threaten client confidentiality and a lawyer's duty of reasonable care. But future applications might be more conducive to client representation. Imagine a private generative AI application that lives on a firm's server: it has access to the firm's document database and its Westlaw account, and it does not report the data it accesses to any outside party. Requests to that application for research and document production would still require oversight by an attorney, but the application could maintain client confidentiality and produce more reliable first drafts than current offerings. This is one possible future tool for lawyers. Bar associations should avoid making categorical rules about AI and should instead provide resources and guidelines that help attorneys responsibly adopt AI applications that enable them to provide better representation to clients.

Consistent with our thoughts here, both Jarvis and Griffith mention using AI tools to augment the practice of law and address unmet civil legal needs. We hope that the New Hampshire courts and the New Hampshire Bar Association develop an active interest in AI tools that help pro se litigants access the court system and help lawyers provide superior counsel to their clients. Human lawyers are not going anywhere any time soon, but they will need guidance on how to adapt to, and adopt, these new generative technologies.
