Use Of ChatGPT In Federal Litigation Holds Lessons For Lawyers And Non-Lawyers Everywhere

Seyfarth Shaw LLP

Contributor


You may have recently seen press reports about lawyers who submitted papers to the U.S. District Court for the Southern District of New York that included citations to cases and decisions that, as it turned out, were wholly made up; they did not exist. The lawyers in that case used the generative artificial intelligence (AI) program ChatGPT to perform their legal research for the court submission, but did not realize that ChatGPT had fabricated the citations and decisions. This case should serve as a cautionary tale for anyone seeking to use AI in connection with legal research, legal questions, or other legal issues, even outside of the litigation context.

In Mata v. Avianca, Inc.,1 the plaintiff brought tort claims against an airline for injuries allegedly sustained when one of its employees hit him with a metal serving cart. The airline filed a motion to dismiss the case. The plaintiff's lawyer filed an opposition to that motion that included citations to several purported court decisions in its argument. On reply, the airline asserted that a number of the court decisions cited by the plaintiff's attorney could not be found, and appeared not to exist, while two others were cited incorrectly and, more importantly, did not say what plaintiff's counsel claimed. The Court directed plaintiff's counsel to submit an affidavit attaching the problematic decisions identified by the airline.

Plaintiff's lawyer filed the affidavit as directed, stating that he could not locate one of the decisions but claiming to attach the others, with the caveat that certain of the decisions "may not be inclusive of the entire opinions but only what is made available by online database [sic]."2 Many of the decisions annexed to this affidavit, however, were not in the format of decisions published by courts on their dockets or by legal research databases such as Westlaw and LexisNexis.3

In response, the Court stated that "[s]ix of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations,"4 using a non-existent decision purportedly from the Eleventh Circuit Court of Appeals as a demonstrative example. The Court stated that it contacted the Clerk of the Eleventh Circuit and was told that "there has been no such case before the Eleventh Circuit" and that the docket number shown in the plaintiff's submission was for a different case.5 The Court noted that "five [other] decisions submitted by plaintiff's counsel . . . appear to be fake as well." The Court scheduled a hearing for June 8, 2023, and ordered plaintiff's counsel to show cause as to why he should not be sanctioned for citing "fake" cases.6

At that point, plaintiff's counsel revealed what happened.7 The lawyer who had originally submitted the papers citing the non-existent cases filed an affidavit stating that another lawyer at his firm was the one who handled the research, which the first lawyer "had no reason to doubt." The second lawyer, who conducted the research, also submitted an affidavit in which he explained that he performed legal research using ChatGPT. The second lawyer explained that ChatGPT "provided its legal source and assured the reliability of its content." He explained that he had never used ChatGPT for legal research before and "was unaware of the possibility that its content could be false." The second lawyer noted that the fault was his, rather than that of the first lawyer, and that he "had no intent to deceive this Court or the defendant." The second lawyer annexed screenshots of his chats with ChatGPT, in which the second lawyer asked whether the cases cited were real. ChatGPT responded "[y]es," one of the cases "is a real case," and provided the case citation. ChatGPT even reported in the screenshots that the cases could be found on Westlaw and LexisNexis.8

This incident provides a number of important lessons. Some are age-old lessons about double-checking your work and the work of others, and owning up to mistakes immediately. There are also a number of lessons specific to AI, however, that are applicable to lawyers and non-lawyers alike.

This case demonstrates that although ChatGPT and similar programs can provide fluent responses that appear legitimate, the information they provide can be inaccurate or wholly fabricated. In this case, the AI software made up non-existent court decisions, even using the correct case citation format and stating that the cases could be found in commercial legal research databases. Similar issues can arise in non-litigation contexts as well. For example, a transactional lawyer drafting a contract, or a trusts and estates lawyer drafting a will, could ask AI software for common, court-approved contract or will language that, in fact, has never been used and has never been upheld by any court. A real estate lawyer could attempt to use AI software to identify the appropriate title insurance endorsements available in a particular state, only to receive a list of inapplicable or non-existent endorsements. Non-lawyers hoping to set up a limited liability company or similar business structure without hiring a lawyer could find themselves led astray by AI software as to the steps involved or the forms that need to be completed and filed. The list goes on.

The case also underscores the need to take care in how questions to AI software are phrased. Here, one of the questions asked by the lawyer was simply "Are the other cases you provided fake?"9 Asking questions with greater specificity could provide users with the tools needed to double-check the information against other sources, but even the most artful prompt cannot change the fact that the AI's response may be inaccurate. That said, there are also many potential benefits to using AI in connection with legal work, if it is used correctly and cautiously. Among other things, AI can assist in sifting through voluminous data and drafting portions of legal documents. But human supervision and review remain critical.

ChatGPT frequently warns users who ask legal questions that they should consult a lawyer, and it does so for good reason. AI software is a powerful and potentially revolutionary tool, but it has not yet reached the point where it can be relied upon for legal questions, whether in litigation, transactional work, or other legal contexts. Individuals who use AI software, whether lawyers or non-lawyers, should understand its limitations and recognize that they cannot rely solely on its output. Any output generated by AI software should be double-checked and verified through independent sources. When used correctly, however, AI has the potential to assist lawyers and non-lawyers alike.

Footnotes

1. Case No. 22-cv-1461 (S.D.N.Y.).

2. Id. at Dkt. No. 29.

3. Id.

4. Id. at Dkt. No. 31.

5. Id.

6. Id.

7. Id. at Dkt. No. 32.

8. Id.

9. Id.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
