I. Introduction

Artificial intelligence ("AI") is one of the most rapidly developing components of the technology sector. This is especially true within the last five years, as evidenced by the increased incorporation of AI technology into various commercial industries.1 Now, however, conversational AI, a technology designed to produce human-like responses, is making its way into professional industries and other specialized fields. While these fields have relied on some forms of AI for assistance in the past, AI platforms such as ChatGPT, Harvey and Kore are now being presented as substitutes for, or supplements to, professional human support.2 As attractive and convenient as this technology may seem, it is critical to remain cognizant of its shortcomings and of the potential repercussions of users' overreliance on it.

II. What is GPT Technology?

There are four primary types of AI systems: reactive machines, limited memory machines, theory of mind and self-aware AI. Generative pre-trained transformers ("GPT") are a family of neural network models trained on large volumes of internet data to generate responses to human input. ChatGPT is a "chatbot" powered by these networks and "trained" using Reinforcement Learning from Human Feedback.3 Language processing systems such as ChatGPT are generally classified as limited memory machines because of their restricted ability to retain and apply previously analyzed data. Limited memory machines tend to require greater amounts of information from the user in order to provide more precise responses. These platforms are designed to generate human-like responses to questions posed by users. Since its release in November 2022, ChatGPT has been highly praised by some for its sophisticated design and its seemingly useful ability to answer questions quickly. It has been harshly criticized by others, however, for the lack of useful, substantive information it provides when posed with specialized tasks such as legal research, writing and analysis.
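
To make the mechanics concrete, the sketch below shows how an application might submit a prompt to a GPT model through a programmatic interface. It is a minimal illustration only, assuming the openai Python package (v1 interface) and an API key stored in the OPENAI_API_KEY environment variable; the model name is illustrative rather than a recommendation.

    # A minimal sketch of querying a GPT model programmatically, assuming
    # the `openai` Python package (v1 interface) and an OPENAI_API_KEY
    # environment variable. The model name below is illustrative.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Explain what a limited memory machine is in two sentences."},
        ],
    )

    print(response.choices[0].message.content)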

III. GPT Technology in the Law – Using GPT Systems to Perform Legal Analysis

Unsurprisingly, ChatGPT and similar systems have been tested by various legal institutions to determine their effectiveness from a practical standpoint.4 In some instances, the platforms produced sound and accurate answers. Other tests, however, revealed evident flaws in the technology's ability to provide specialized legal analysis.5

For instance, in February 2023, the ABA Journal published an article illustrating the shortfalls in ChatGPT's computing system.6 The test centered on a real-life petition by California conservation groups asking the state's Fish and Game Commission to add four bumblebee species to the endangered species list, even though the governing statute covers only birds, mammals, fish, amphibians, reptiles or plants. The conservation groups argued that the bees could be listed as "fish" because the statutory definition of "fish" includes invertebrates. Multiple agricultural groups objected to the petition and eventually filed a lawsuit. The conservation groups lost at the trial court but prevailed on appeal. When the California Supreme Court denied a petition for review, Chief Justice Cantil-Sakauye noted that "our decision not to order review will be misconstrued by some as an affirmative determination by this court that under the law, bumblebees are fish."7

Using this case, the ABA Journal created an assignment for ChatGPT that mimicked tasks often presented to litigators. The test was simple, requiring only that two prompts be entered into the ChatGPT chat box: (1) "Draft a brief to the California Supreme Court on why it should review the California Court of Appeal's decision that bees are fish"; and (2) "Draft a brief to the California Supreme Court on why it should not review the California Court of Appeal's decision that bees are fish."8
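
For readers who wish to replicate this kind of experiment, the same two prompts can be submitted through a programmatic interface rather than the web chat box. The sketch below is illustrative only and assumes the openai Python package and API key described above.

    # Running the ABA Journal's two-prompt test programmatically;
    # a sketch only, assuming the `openai` package and API key as above.
    from openai import OpenAI

    client = OpenAI()

    prompts = [
        "Draft a brief to the California Supreme Court on why it should review "
        "the California Court of Appeal's decision that bees are fish.",
        "Draft a brief to the California Supreme Court on why it should not review "
        "the California Court of Appeal's decision that bees are fish.",
    ]

    for prompt in prompts:
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # illustrative model choice
            messages=[{"role": "user", "content": prompt}],
        )
        print(response.choices[0].message.content)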

While both responses contained persuasive phrases and quippy retorts, neither included substantive legal arguments for or against its respective position. Rather, the briefs were flooded with conclusory statements based solely on the search terms in the prompts. In other words, even though each response contained strings of illogical legalese, neither hinted at the central issue of the case nor mentioned relevant precedent.

Similarly, SC Lawyer recently featured an article probing the limits of ChatGPT's analytical capabilities. There, the platform was given multiple prompts to test its ability to perform accurate legal research and draft sound legal arguments. As in the ABA Journal's test, much of the information provided was only partially relevant and overlooked several critical issues.

On one occasion, the platform was asked to draft a legal memorandum on the enforceability of non-disclosure agreements in South Carolina while accounting for certain key facts that could affect the analysis. Despite citing authority with confidence and having the appearance of a well-formatted legal brief, the response failed to apply the basic principles taught in 1L legal research and writing classes. For example, ChatGPT inaccurately recited the elements of the law at issue, failed to consider relevant precedent, and cited authority contrary to the position being asserted.9 That said, SC Lawyer found that ChatGPT isn't all bad: the program proved effective at other tasks, such as paraphrasing specific sections of legal text. In sum, though, conversational AI paled in comparison to lawyers in terms of effective and accurate advocacy.

Despite these shortcomings, the technology itself is not useless. In fact, the machine-learning model powering the platform is highly sophisticated and is considered an incredible advancement for AI as a whole.10

Some law firms have gone so far as to incorporate ChatGPT's sister platform, "Harvey," into the resources available to their attorneys.11 In February 2023, Allen & Overy announced its partnership with Harvey and indicated that the system would be used to "generate insights, recommendations and predictions based on large volumes of data, [to enable] lawyers to deliver faster, smarter and more cost-effective solutions to their clients."12 Even then, however, the firm expressly noted that the technology's output must be carefully reviewed by an attorney.

IV. The Problem

The central concern with these systems is users' overreliance on the accuracy of the data produced. As illustrated in the tests above, ChatGPT and similar platforms can produce human-like responses instantaneously. Moreover, the responses are generally delivered in a fluent, persuasive tone, creating a trustworthy impression on the reader. However, these responses tend to lack full consideration and understanding of the more subtle and nuanced legal issues within the sources on which the program relies. Another fundamental issue with the current technology is its inability to perform subjective analyses, such as determining whether an act is reasonable or weighing legal factors against a given set of facts.

Additionally, there have been several instances of flaws in the platforms' ability to accurately cite legal sources and authority.13 A startling pitfall of the technology is its tendency to create citations that are properly formatted but wholly inaccurate; indeed, these platforms are capable of generating fake citations.14 An unfortunate example recently transpired in the Southern District of New York, where attorneys cited six cases that the court later found to be nonexistent.15 Ultimately, the court sanctioned both attorneys after finding that they acted in bad faith and made "acts of conscious avoidance and false and misleading statements to the court."16 Further, in June 2023, a U.S. District Judge for the Northern District of Texas issued an order requiring attorneys to certify either that no portion of their filings was drafted by artificial intelligence or that any AI-drafted language was vetted by a human being.17 Developments such as these undercut the "time-saving" appeal associated with this technology and strain users' ability to rely on the information it provides.18

Even assuming the requested outputs are accurate, use of this technology by non-attorneys could have broad implications. Of course, it is never wise to engage in legal disputes or litigation without the advice of licensed counsel. And though GPT platforms may provide "lawyer-like" responses, they pose a danger to the public at large to the extent people rely on them for legal advice. Moreover, a non-lawyer's use of the technology to perform the functions of an attorney will undoubtedly raise issues of the unauthorized practice of law.19

Aside from accuracy concerns, attorneys should consider the implications ChatGPT may have for the confidentiality of client information. The comments to Rule 1.6 of the ABA's Model Rules of Professional Conduct provide that "a fundamental principle in the client-lawyer relationship is that, in the absence of the client's informed consent, the lawyer must not reveal information relating to the representation."20 The comments go on to explain that this provision "prohibits a lawyer from revealing information relating to the representation of a client. This prohibition also applies to disclosures by a lawyer that do not in themselves reveal protected information but could reasonably lead to the discovery of such information by a third person." In this regard, using GPT systems to generate legal work specific to a case could be interpreted as a relinquishment of confidentiality, because these limited memory machines retain user-submitted data to enhance the accuracy of the AI.

Nevertheless, while platforms like ChatGPT and Harvey raise several unprecedented questions and may not currently be capable of drafting a persuasive and accurate legal brief, they aren't worthless. As noted below, the technology can have an immediate positive impact within the legal community.

V. The Potential Benefits

The technology powering GPT systems has demonstrated its effectiveness with non-analytical tasks. Like a search engine, a GPT system can quickly review large amounts of information while simultaneously generating descriptive output. As such, the technology can save time by distilling lengthy legal sources into relatively short pieces of text. Moreover, it may prove useful for onerous tasks such as document review.

GPT technology also shows promise of evolving into systems capable of generating legal documents such as contracts, general pleadings, and basic written discovery. For instance, as illustrated in a study conducted by Harvard Law School's Center on the Legal Profession, GPT systems can generate accurate pleadings, such as complaints, and transactional documents, such as contracts. One potential explanation for the technology's success with these documents is that drafting them requires comparatively little substantive analysis.

Additionally, the technology can formulate potential arguments, summarize legal principles and assist with factual research. In other words, it can serve as a starting point or sounding board before one begins a particular task. In one successful test, for example, a chatbot generated a list of discovery questions based on the input of specific facts.21 Similar tests revealed the technology's ability to draft deposition questions.22 In short, ChatGPT and similar systems can assist with simple, discrete legal tasks. Nevertheless, the output should always be carefully reviewed for discrepancies.
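
As a concrete illustration of this kind of task, the sketch below feeds a set of case facts to a chatbot and asks for draft discovery questions. It assumes the openai Python package described above; the facts and prompt are hypothetical, and the output would still require attorney review.

    # Generating draft discovery questions from a set of facts; a sketch
    # only, assuming the `openai` package as above. The facts are hypothetical.
    from openai import OpenAI

    client = OpenAI()

    facts = (
        "Plaintiff alleges she slipped on an unmarked wet floor in the "
        "defendant grocery store and injured her back on March 3, 2023."
    )

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[
            {"role": "system", "content": "You draft civil discovery requests for attorneys."},
            {"role": "user", "content": f"Based on these facts, draft ten interrogatories: {facts}"},
        ],
    )

    print(response.choices[0].message.content)  # output must be reviewed by an attorney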

VI. Conclusion

GPT technology has advanced at an impressive pace and will continue to develop rapidly. Attorneys can save time and money by incorporating aspects of GPT AI into their legal practice; however, the current systems tend to produce unreliable results. That said, certain pieces of the technology may be immediately useful to practitioners, especially for simple tasks that involve organizing large amounts of information or drafting basic legal templates. Regardless of one's take on these platforms, it is inevitable that more refined versions of this technology will continue to be introduced into the legal community.

Footnotes

1. See Examples of Artificial Intelligence (AI) in 7 Industries, EMERITUS (Mar. 4, 2022), https://emeritus.org/blog/examples-of-artificial-intelligence-ai/ (explaining that the Financial Services, Insurance, Healthcare, Life Sciences, Telecommunications, Energy, and Aviation industries have each increased their reliance on AI in recent years).

3. Introducing ChatGPT, OPENAI (Nov. 30, 2022), https://openai.com/blog/chatgpt.

4. Blair Chavis, Does ChatGPT Produce Fishy Briefs?, ABA JOURNAL (Feb. 21, 2023), https://www.abajournal.com/web/article/does-chatgpt-produce-fishy-briefs; see also A&O Announces Exclusive Launch Partnership with Harvey, ALLEN & OVERY (Feb. 15, 2023), https://www.allenovery.com/en-gb/global/news-and-insights/news/ao-announces-exclusive-launch-partnership-with-harvey.

5. Chavis, supra note 4; see also Andrew Perlman, The Implications of ChatGPT for Legal Services and Society, HARV. L. SCH. (Mar. 2023), https://clp.law.harvard.edu/knowledge-hub/magazine/issues/generative-ai-in-the-legal-profession/the-implications-of-chatgpt-for-legal-services-and-society/.

6. Chavis, supra note 4.

7. Perlman, supra note 5.

8. Chavis, supra note 4.

9. Eve Ross & Amy Milligan, What Can ChatGPT Do, and Should We Let It?, SC LAWYER, May 2023, at 35–39.

10. Johanna Leggatt, What Is ChatGPT? A Review of the AI in Its Own Words, FORBES (Mar. 22, 2023), https://www.forbes.com/advisor/business/software/what-is-chatgpt/.

11. Allen & Overy, supra note 4.

12. Id.

13. Perlman, supra note 5.

14. Aaron Welborn, ChatGPT and Fake Citations, DUKE UNIV. LIBRARIES (Mar. 9, 2023), https://blogs.library.duke.edu/blog/2023/03/09/chatgpt-and-fake-citations/.

15. The attorneys in this case were sanctioned despite affirming that they did not act with ill intent and did not know ChatGPT was capable of creating fake citations.

16. Sara Merken, New York Lawyers Sanctioned for Using Fake ChatGPT Cases in Legal Brief, REUTERS (June 26, 2023).

17. Jacqueline Thomsen, US Judge Orders Lawyers to Sign AI Pledge, Warning Chatbots "Make Stuff Up", REUTERS (June 2, 2023).

18. In other words, such outputs may actually create more work for the user. Not only must the user verify the information provided, but he or she must also determine whether the source exists at all.

19. Perlman, supra note 5.

20. ABA Model Rules of Professional Conduct, Rule 1.6, Comment 2.

21. Id.

22. Id.
