ARTICLE
16 October 2024

Generative AI In The Legal Profession

As generative AI rapidly transforms industries, legal professionals face the challenge of harnessing its potential while managing significant risks.

Introduction

Prompt:

Provide a thesis statement for the attached document, in 75 words or less.

Response:

As generative AI rapidly transforms industries, legal professionals face the challenge of harnessing its potential while managing significant risks. Integrating AI into legal practices requires careful consideration of its benefits and the confidentiality concerns it raises. Recent guidance from the American Bar Association (ABA) and the U.S. Patent and Trademark Office (USPTO) equips lawyers to make ethical, informed decisions about AI adoption while ensuring technological competence, and to navigate emerging issues surrounding AI-assisted inventorship.

Rapid Technological Advances

While the field of artificial intelligence has been around for decades, the surge in public awareness of generative AI technology generally tracks with the release of OpenAI's Generative Pre-trained Transformer (GPT) version 3.5 large language model (LLM), which was included as a part of the public launch of ChatGPT in November 2022.1

OpenAI and others have continued to refine their AI systems and products to provide additional functionality along with improved speed, capacity for logical reasoning, and data throughput capability. Various generative AI products are widely available to the public and can quickly generate vast amounts of text, debug code, create images, and much more.

As AI systems continue to improve, it should come as no surprise that engineers, scientists, other tech-savvy individuals, and even lawyers are interested in leveraging generative AI technology. Unfortunately, problems arise when technology advances faster than regulation, corporate oversight, and the public's understanding of how the underlying technology actually works.

Problems For Early Adopters

Within five days of launching to the public, ChatGPT reportedly had over a million active users, with the number growing to over a hundred million active users in just two months.2 As public awareness about the capabilities of generative AI technology grew, some companies, like Amazon,3 began implementing bans proactively. Other companies, like Samsung,4 began monitoring the use of generative AI tools and warned employees about the importance of protecting intellectual property when using free generative AI tools.

The warnings were apparently ineffective, and Samsung eventually implemented a ban5 on the use of ChatGPT after learning about employees using it to debug code, optimize a test sequence, and summarize a meeting transcription.6 Other companies like Apple7 followed Samsung's lead and also implemented bans.

An important lesson to be learned at Samsung's expense is that this technology is so powerful that it can make people act against their better judgment, especially when they don't fully understand how it works. This power also has the unfortunate effect of causing inexperienced users to place too much trust in the outputs of generative AI tools. The term "hallucination" refers to AI outputs that appear plausible but are not factually accurate. Hallucination can be mitigated in a variety of ways, but most free AI tools offer little control over the amount of creativity an AI can use. This means that AI tools can sometimes be confidently incorrect.

In what is believed to be the first instance of AI hallucination being detected in a legal proceeding, lawyers for the plaintiff in Mata v. Avianca8 filed opposition documents that were drafted with the assistance of ChatGPT which, unfortunately, included citations to nonexistent court cases. Moreover, copies of the fake cases, also generated with the assistance of ChatGPT, were submitted to the court. This led to predictable results, and the lawyers involved were sanctioned.

Generative AI tools are becoming integrated into search engines,9 word processing software,10 operating systems,11 and more. Despite the substantial risks involved with improper use, this technology cannot be ignored, and thoughtful consideration is required.

Risk Assessment and Management

It is important to understand that generative AI tools are not categorically unsafe. Banning generative AI tools may help mitigate risk, especially in the short term, but the reality is that AI tools offer significant advantages that are hard to ignore. Importantly, even if a company bans generative AI tools on corporate devices, AI tools can still be accessed, for free, using personal devices. Thus, risk still exists despite bans. Deciding to ban or otherwise limit the use of generative AI technology should include thoughtful consideration of how AI tools work, what the benefits of using them are, and what risks actually exist.

In contrast to the situation that Samsung was concerned about with its employees, many generative AI tools can be run on private servers with the ability to disable external model training.12 This ensures that data exchanged with the AI system remains confidential and that interactions with the AI system do not influence the outputs of similar tools used by others.

Using privately hosted AI systems helps mitigate risk, discourages unauthorized use of other AI tools, and affords broad creative freedom in a secure environment. However, there are other ways to leverage generative AI technology while managing risk. For example, generative AI tools could be permitted for low-risk tasks but banned for higher-risk tasks. This approach allows experience to be gained with generative AI systems without exposing sensitive data. In some cases, free web-based AI tools could be allowed for low-risk tasks, even if they employ automated model training.

In addition to privately hosted AI systems and free web-based AI tools, specialized and task-specific AI tools are available from various software providers, including AI tools designed for intellectual property practitioners. As the USPTO emphasizes in recent Guidance,13 "When practitioners rely on the services of a third party to develop a proprietary AI tool, store client data on third-party storage, or purchase a commercially available AI tool, practitioners must be especially vigilant to ensure that confidentiality of client data is maintained."14

Generative AI tools are rapidly evolving, and the risks associated with using them necessitate thoughtful consideration. It is important for lawyers to keep pace with these changes and understand how they affect their practice. Whether or not individual lawyers are ready to embrace the technology, the American Bar Association (ABA) and the USPTO both appear to recognize that generative AI tools are here to stay.

Guidance From the American Bar Association (ABA)

In July 2024, the ABA released Formal Opinion 512 on the subject of generative AI,15 which includes discussion of competence, confidentiality, communication, meritorious claims and contentions, candor toward the tribunal, supervisory responsibilities, and fees.

A key takeaway from this opinion involves competence under Model Rule 1.1; lawyers should be aware of generative AI tools relevant to their practice so that they can make informed decisions about whether or not to use them.16 The ABA notes the importance of either acquiring a reasonable understanding of the benefits and risks of generative AI tools, or drawing on the expertise of others who can provide guidance about the AI tool's capabilities and limitations.17 Even though much of the guidance is directed to lawyers who are choosing to utilize generative AI technology, the ABA also contemplates that lawyers may eventually have to use generative AI tools in order to competently complete certain tasks for clients18 as the technology becomes as ubiquitous as email and electronic files.

The guidance relating to competence under Model Rule 1.1 also emphasizes the importance of lawyers conducting independent verification of generative AI tool outputs, noting the potential for hallucination, mistakes, and other issues. Accordingly, outputs from even the most refined AI systems still necessitate independent attorney review.19

Another key takeaway involves confidentiality under Model Rule 1.6. Here, the ABA explains that lawyers must evaluate the risk of inadvertent disclosure of data input into generative AI systems both externally and internally. In particular, a client's informed consent is required when using generative AI tools that are capable of disclosing information relating to the representation of a client, either directly (e.g., a software developer refining the AI system) or indirectly (e.g., used to train a model accessible by others).20

In addition to understanding how to ethically and responsibly leverage generative AI in their own practice, attorneys also need to be cognizant of the impact that the technology is having on their clients as regulation and caselaw catch up.

Guidance From the USPTO Regarding Inventorship

In February 2024, the USPTO provided Guidance on inventorship involving AI-assisted inventions.21 The most important takeaway is that AI systems cannot be inventors or joint inventors, but AI-assisted humans can.

The question of whether an AI system could be an inventor for a patent application was initially answered by Thaler v. Vidal,22 in which the Federal Circuit held that an inventor must be a "natural person." However, Thaler involved two utility patent applications where an AI system called DABUS (Device for the Autonomous Bootstrapping of Unified Sentience) was listed as the sole inventor, and the Federal Circuit specifically noted that "we are not confronted today with the question of whether inventions made by human beings with the assistance of AI are eligible for patent protection."23

The USPTO Guidance is largely framed around the Federal Circuit's decision in Thaler and clarifies that joint inventors or coinventors must also be natural persons.24 Accordingly, listing an AI system as an inventor or joint inventor on an Application Data Sheet, an inventor's oath or declaration, or a substitute statement will result in improper inventorship.

However, if a natural person significantly contributed to a claimed invention, even in scenarios where AI systems were instrumental in the creation of the invention, the use of an AI system does not necessarily disqualify a natural person as an inventor.25 Interestingly, the USPTO Guidance explicitly notes that AI systems are capable of performing acts that, if instead performed by natural persons, could constitute inventorship.26 This appears to serve as the rationale for assessing AI-assisted inventorship under the lens of joint inventorship, even where only a single AI-assisted natural person is involved.

In order to determine whether or not an AI-assisted natural person has made a significant contribution to an invention, the USPTO Guidance applies the factors from Pannu v. Iolab Corp.,27 noting that "Although the Pannu factors are generally applied to two or more people who create an invention (i.e., joint inventors), it follows that a single person who uses an AI system to create an invention is also required to make a significant contribution to the invention, according to the Pannu factors, to be considered a proper inventor."28 According to the Pannu factors, each inventor must:

  1. contribute in some significant manner to the conception or reduction to practice of the invention,
  2. make a contribution to the claimed invention that is not insignificant in quality, when that contribution is measured against the dimension of the full invention, and
  3. do more than merely explain to the real inventors well-known concepts and/or the current state of the art.29

While a plain reading of the first Pannu factor indicates that a significant contribution to conception or reduction to practice is required, the USPTO Guidance emphasizes that conception (or simultaneous conception and reduction to practice) is required, and that reduction to practice alone is insufficient.30

Still operating under the lens of joint inventorship, the USPTO Guidance notes that a named inventor does not have to contribute to every claim, but also states that every claim must have been invented by at least one named inventor.31 Support for this statement appears in an endnote citing 35 U.S.C. 115(a), which requires that an oath or declaration include "the name of the inventor for any invention claimed in the application."32 It is worth noting that the phrase "every claim" does not appear in 35 U.S.C. 115(a), and highly subjective and reasonable arguments can be made about how dependent claims should be characterized.

The USPTO Guidance also states: "In other words, a natural person must have significantly contributed to each claim in a patent application or patent. In the event of a single person using an AI system to create an invention, that single person must make a significant contribution to every claim in the patent or patent application."33

In view of this, applicants and practitioners who work with AI-assisted inventors should carefully consider whether their processes and procedures adequately document inventor contributions and align with their drafting and prosecution strategies. In some cases, this may involve a calculated decision to either:

  1. avoid claiming concepts conceived by AI systems,
  2. alter claim drafting strategies to ensure significant human contribution to every claim, or
  3. proactively prepare for arguments emphasizing the human contribution to independent claims as the basis for significant contribution to dependent claims.

The USPTO Guidance acknowledges that there is no bright-line test for determining whether or not an AI-assisted human has made a significant contribution, and provides a non-exhaustive list of principles to help inform the Pannu analysis with AI-assisted inventions.34

Along with several helpful examples, the principles section of the Guidance also includes discussion of how prompt engineering may rise to the level of a significant contribution, and acknowledges that humans who design, build, or train AI systems could be considered inventors.35 This underscores the importance of understanding how AI systems work and how inventors interact with them; beyond the concerns of mitigating risk and keeping data confidential, using an external vendor for AI services could also potentially affect inventorship.

Conclusion

As generative AI continues to evolve, its integration into legal practice appears all but inevitable. However, the power of this technology comes with significant responsibilities. We must approach AI with a combination of curiosity, caution, and ethical rigor. The guidance from the ABA and USPTO highlights the need for ongoing education and vigilance in using these tools. Lawyers must not only understand the capabilities and limitations of AI but also actively shape its application to ensure it serves justice without compromising client confidentiality or the integrity of the legal process.


Footnotes

1. https://openai.com/index/chatgpt/

2. https://wisernotify.com/blog/chatgpt-users/

3. https://moveo.ai/blog/companies-that-banned-chatgpt/

4. https://economist.co.kr/article/view/ecn202303300057

5. https://www.bloomberg.com/news/articles/2023-05-02/samsung-bans-chatgpt-and-other-generative-ai-use-by-staff-after-leak

6. https://adguard.com/en/blog/samsung-chatgpt-leak-privacy.html

7. https://fortune.com/2023/05/19/apple-restricts-chatgpt-employee-data-leaks-iphone/

8. Mata v. Avianca, Inc., F. Supp. 3d, 22-cv-1461 (PKC), 2023 WL 4114965 (S.D.N.Y. June 22, 2023)

9. https://blog.google/products/search/generative-ai-google-search-may-2024/

10. https://www.microsoft.com/en-us/microsoft-365/enterprise/copilot-for-microsoft-365

11. https://www.apple.com/apple-intelligence/

12. https://learn.microsoft.com/en-us/legal/cognitive-services/openai/data-privacy

13. Guidance on Use of Artificial Intelligence-Based Tools in Practice Before the United States Patent and Trademark Office, Federal Register, Vol. 89, No. 71, 25609-25617 (April 11, 2024)

14. Id. at 25617

15. Formal Opinion 512: Generative Artificial Intelligence Tools, American Bar Association, Standing Committee on Ethics and Professional Responsibility (July 29, 2024)

16. Id. at 5

17. Id. at 4

18. Id. at 5

19. Id. at 3-4

20. Id. at 6-7

21. Inventorship Guidance for AI-Assisted Inventions, Federal Register, Vol. 89, No. 30, 10043-10051 (February 13, 2024)

22. Thaler v. Vidal, 43 F.4th 1207, 1213 (Fed. Cir. 2022), cert. denied, 143 S. Ct. 1783, 215 L. Ed. 2d 671 (2023)

23. Id. at 1213

24. Inventorship Guidance for AI-Assisted Inventions, at 10045

25. Id. at 10046

26. Id. at 10045

27. Pannu v. Iolab Corp., 155 F.3d 1344, 1351 (Fed. Cir. 1998)

28. Inventorship Guidance for AI-Assisted Inventions, at 10048

29. Pannu v. Iolab Corp., at 1351

30. Inventorship Guidance for AI-Assisted Inventions, at 10047

31. Id. at 10048

32. 35 U.S.C. 115(a)

33. Inventorship Guidance for AI-Assisted Inventions, at 10048

34. Id. at 10048-10049

35. Id.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
