Artificial Intelligence ("AI") and Generative AI ("GAI") may be complicated, annoying, and intimidating to some, as well as generally problematic.1 However, AI and GAI are largely now unavoidable in the practice of law. Lawyers must therefore grapple with the ethical implications of their use in the practice of law. Here are some practical observations and suggestions on how best to engage with AI and GAI that may assist attorneys in complying with their ethical and professional obligations.
As a threshold matter, you may already be using AI without knowing it. For example, if you have ever accepted a suggestion on how to complete a sentence while typing an email in Outlook, used a "suggested reply" from Google or Outlook, or used Alexa, Siri, or another "virtual assistant," you have already used AI!
There is a major difference between private use and professional use. The use of AI in your professional lives implicates ethical and professional obligations, including those under the Rules of Professional Conduct. Further, many jurisdictions and judges have issued specific orders relative to the use of GAI and AI in their courtrooms.2 Over the past two years, many state bar associations, including Florida,3 California,4 Michigan,5 Pennsylvania,6 New York,7 and New Jersey,8 have issued guidance regarding the use of AI in the legal profession. Likewise, the American Bar Association issued a Formal Ethics Opinion on Generative Artificial Intelligence in July 2024.9 The guidance offered across the country generally coalesces around the following practical tips to help lawyers use GAI and AI ethically and professionally.10
1. Know Your GAI Tool
At the outset, it helps to understand what GAI is – and lawyers unquestionably have an ethical duty to know and understand the technology they use in their practice.11
GAI (as opposed to simply AI) is a "deep-learning model" that takes data from all available sources and "learn[s] to generate statistically probable outputs when prompted."12 This means that the GAI tool generates a response based on all of the data it has access to, which likely includes the data you are inputting. As a result, it is important to know the sources of information the GAI tool you are using relies upon, whether it is learning from the data you put into it, and how it stores that information. For example, if you are using a public GAI product like ChatGPT, ChatGPT will "learn" from the information you put into the prompt. This means that any confidential, client, or proprietary information inserted into the prompt becomes part of the public domain. For this reason alone, as a general rule, lawyers should not rely on GAI tools designed for the public in the practice of law.13 Lawyers should not put any confidential information, including the identity of the client or identifying details about the case, into a GAI product without taking steps to ensure that the product adheres to stringent data security, confidentiality, and retention protocols: doing so is likely a breach of your obligation to ensure client confidentiality.14
And importantly, GAI does not pull from data sources alone, but uses those data sources to generate new content. This is why its responses may include "hallucinations," defined as "algorithmic pattern misperceptions that create inaccurate or nonsensical output."15 Put bluntly, hallucinations can be fake cases or statutes. Courts across the country have uniformly issued the same message to all those who come before them: "verify AI-generated content in legal submissions!"16 Lawyers must critically review and validate all output of generative AI before relying upon it.
This remains true even when using legal research platforms that promote their results as incapable of containing hallucinations because the source of the GAI's learning is the data on the legal research service. While the responses may not be true hallucinations, because they are cases or statutes that actually exist, sample searches have yielded mixed results in terms of the applicability or appropriateness of the output. Use GAI as a starting point for research, not the end point.
2. Obtain Client Disclosure and Consent Regarding the Use of GAI
When lawyers use GAI and AI in their practices, it is a best practice to disclose this to their clients. In certain situations, such disclosure is mandatory.17 This stems, in part, from the requirement in Rule of Professional Conduct 1.4 that a lawyer must inform the client about the "means by which the client's objectives are to be accomplished."
As a practical matter, one way to do this is to include a paragraph about the use of GAI/AI in the client engagement letter. That language should inform the client about the benefits, risks, and limitations of the use of GAI and how the lawyer or firm intends to use it in connection with the client's matter.
3. Ensure Ethical Billing for the Use of GAI
It is unquestionable that the use of AI makes many tasks in the legal profession more efficient. This is especially true of the use of GAI and AI in connection with large discovery productions and Electronically Stored Information ("ESI"), document drafting, and research. A lawyer's fees must always be reasonable. Lawyers may not charge clients for the time spent learning the GAI or AI program. Lawyers must assess whether they may ethically charge a client for the use of a proprietary GAI platform and, if so, must never charge more than the direct costs associated with its use for that client.18
It seems obvious, but lawyers may not bill for the time saved by using AI; they may bill only for the actual time spent. For example, if a lawyer uses AI to draft a petition, a task that would normally take 30 minutes, and it takes only five minutes, the lawyer may bill only for those five minutes. The lawyer can and should also bill for the time spent reviewing the draft for accuracy and completeness before filing it. The reasonableness inquiry also arises in flat-fee or contingency-fee cases where lawyers are using AI to do the same work more quickly. Thus, a lawyer must assess whether the same fee or percentage is reasonable under Rule 1.5 when the use of GAI reduces the time spent.19
4. Do Not Over-Rely on GAI
A lawyer's professional judgment cannot be delegated to AI. At all times, the lawyer is responsible for exercising appropriate professional judgment and complying with all ethical rules and laws. A lawyer should supplement any GAI-performed task with critically applied human analysis and a careful check of all authorities.20 This is important not only because of a lawyer's obligation to ensure competence and diligence in legal work performed, but also because of the duty to present only meritorious claims and contentions to the court and the obligation of candor.
5. Ensure Appropriate Supervision Regarding the Use of GAI
Lawyers are also ethically obligated to supervise GAI just as they would a person. This means that they are responsible for the GAI's work and must critically evaluate whether its use is appropriate and complies with the Rules of Professional Conduct, just as they would with a human.
Likewise, perhaps obviously, supervisory lawyers have an obligation to ensure that junior attorneys and all support staff are using GAI appropriately and professionally. This obligation applies both inside the firm and to any third-party vendor using a GAI platform.21 It is recommended that lawyers and firms develop a GAI use policy and offer internal training on the ethical aspects and pitfalls of its use, along with strong vendor policies. The New Jersey Report includes a sample GAI policy that may be used as a starting point for developing your own firm policy; however, like all things GAI, it should be reviewed for appropriateness for your individual office's circumstances and needs.22
Lawyers should also be careful not to delegate any task that constitutes the practice of law to GAI. This is particularly true when lawyers use GAI chatbots on their websites for client intake. To avoid running afoul of these rules, a lawyer using GAI chatbots must ensure that the bots identify themselves as non-lawyers, limit questions to those seeking factual information only, and do not offer any legal advice. A chatbot may not engage in a conversation with a represented party. The chatbot's protocol should include appropriate screening questions, and the chatbots should further be trained to refer all legal or substantive questions to an actual lawyer.23 Finally, lawyers should ensure that any GAI chatbots do not violate any Rule of Professional Conduct relative to advertising. A rough illustration of how these safeguards might be built into a chatbot appears below.
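To make the chatbot safeguards discussed above concrete, the following is a minimal, hypothetical sketch of intake-chatbot guardrails written in Python. It is illustrative only and is not drawn from any bar guidance or commercial product; all names (such as INTAKE_DISCLAIMER, respond, and the keyword lists) are assumptions, and any real deployment would require far more robust screening and review under the applicable Rules of Professional Conduct.

```python
# Hypothetical sketch of intake-chatbot guardrails reflecting the safeguards
# described above (non-lawyer disclosure, screening, no legal advice, referral).
# All names and keyword lists are illustrative assumptions, not a real product.

INTAKE_DISCLAIMER = (
    "I am an automated assistant, not a lawyer, and I cannot give legal advice. "
    "I can only collect basic factual information for the firm."
)

# Very rough keyword screens; a real system would need far more careful review.
LEGAL_ADVICE_MARKERS = ("should i", "can i sue", "what are my rights", "is it legal")
REPRESENTED_PARTY_MARKERS = ("my lawyer", "my attorney", "i am represented")


def respond(user_message: str) -> str:
    """Return a guarded chatbot reply for a single intake message."""
    text = user_message.lower()

    # Do not converse with a party who indicates they are already represented.
    if any(marker in text for marker in REPRESENTED_PARTY_MARKERS):
        return ("Because you appear to be represented by counsel, I cannot "
                "continue this conversation. Please have your attorney contact the firm.")

    # Refer anything that looks like a request for legal advice to a lawyer.
    if any(marker in text for marker in LEGAL_ADVICE_MARKERS):
        return ("That is a legal question I am not able to answer. "
                "A lawyer from the firm will follow up with you.")

    # Otherwise, limit the exchange to factual intake questions.
    return "Thank you. Could you briefly describe the facts and relevant dates?"


if __name__ == "__main__":
    print(INTAKE_DISCLAIMER)
    print(respond("Can I sue my landlord?"))
    print(respond("The incident happened on March 3 at my workplace."))
```

The design point of the sketch is simply that the bot defaults to gathering factual information and escalates anything legal or substantive to a human lawyer, mirroring the guidance above.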
6. Ensure No Conflicts
There is a growing concern that firms' proprietary GAI platforms may violate the Rules of Professional Conduct relating to conflicts. This is because GAI products may not be equipped to respect the ethical walls put in place to resolve current or former client conflicts. As a result, GAI output may include information learned from a current or former client's matter in violation of Rules 1.7 and 1.9.24
Lawyers must ensure that any GAI platform being used has appropriate confidentiality and security protections and accounts for conflict issues.
Conclusion
While the use of GAI may seem scary and not without risk, it is the way of the future. "'In a few years, it will be almost malpractice' for lawyers not to use AI."25 Thus, a lawyer should educate themselves on how to use it professionally and ethically.
Footnotes
1 For example, studies show that the negative environmental impacts of AI are significant. AI-related infrastructure consumes six times more water than the country of Denmark, and a search using ChatGPT, a popular, free GAI tool, uses ten times the electricity of a Google search. See, United Nations Environment Programme report, "Navigating New Horizons: A global foresight report on planetary health and human wellbeing," published July 15, 2024. There is also a concern that AI models have biases that lead to favoring some groups or perspectives over others, manifesting, for example, in predictive algorithms for risk assessment in criminal justice systems, hiring practices, and facial recognition. See, Joint Formal Opinion 2024-200, "Ethical Issues Regarding the Use of Artificial Intelligence," issued by the Pennsylvania Bar Association Committee on Legal Ethics and Professional Responsibility and Philadelphia Bar Association Professional Guidance Committee (last visited January 28, 2025) ("Pennsylvania Joint Opinion").
2 These orders can range from outright prohibitions on the use of AI to certification and disclosure requirements. See, e.g., 2023 Standing Order issued by the United States Bankruptcy Court for the Western District of Oklahoma (requiring disclosure and certification); Standing Order issued by Judge Coleman in the United States District Court for the Northern District of Illinois (prohibition). See, ABA Business Law Today, "Common Issues That Arise in AI Sanction Jurisprudence and How the Federal Judiciary Has Responded to Prevent Them," September 2024.
3 Florida Bar Ethics Opinion 24-1, issued January 19, 2024 ("Florida Opinion").
4 The State Bar of California, Standing Committee on Professional Responsibility and Conduct, "Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law," (last visited January 29, 2025) ("California Guidance").
5 State Bar of Michigan, "Ethical Duty to Maintain Technological Competence Including Artificial Intelligence," Opinion No. JI-155, issued October 27, 2023 (last visited January 28, 2025).
6 Pennsylvania Joint Opinion.
7 New York State Bar Association Task Force on Artificial Intelligence, 2024 Report and Recommendations.
8 New Jersey Task Force on Artificial Intelligence (AI) and the Law: Report, Requests, Recommendations, and Findings, issued May 2024 ("New Jersey Report").
9 American Bar Association, Standing Committee on Ethics and Professional Responsibility, Formal Opinion 512, "Generative Artificial Intelligence Tools," issued July 29, 2024 ("ABA Opinion").
10 Bloomberg Law, "In Focus: Artificial Intelligence (AI)."
11 Model Rule of Professional Conduct 1.1, Comment 8 ("... [a] lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology ...").
12 IBM, What is generative AI, April 20, 2023, (last visited January 29, 2025).
13 New Jersey Report, Finding Nos. 2-3.
14 California Guidance at p. 2. There could be exceptions to this based on the nature of the GAI product and who has access to the output; of course, the cardinal rule is to know your GAI tool.
15 ABA Journal of Legal Technology, "Will generative AI ever fix its hallucination problem?," John Roemer, October 1, 2024 (last accessed January 15, 2025).
16 Kohls v. Ellison, 24-CV-3754 (LMP/DLM), 2025 WL 66514, at *4 (D. Minn. Jan. 10, 2025) (a case involving an Attorney General's reliance on an affidavit containing false, AI-generated content), citing Mata v. Avianca, Inc., 678 F. Supp. 3d 443, 466 (S.D.N.Y. 2023) (sanctioning attorney for including fake, AI-generated legal citations in a filing); Park v. Kim, 91 F.4th 610, 614-16 (2d Cir. 2023) (referring attorney for potential discipline for including fake, AI-generated legal citations in a filing); Kruse v. Karlan, 692 S.W.3d 43, 53 (Mo. Ct. App. 2024) (dismissing appeal because litigant filed a brief with multiple fake, AI-generated legal citations).
17 While "[i]t is not possible to catalogue every situation in which lawyers must inform clients about their use of GAI," one example would be the need to disclose and obtain consent from a client prior to the imputation of confidential client information into a GAI tool. See, ABA Opinion at pp. 8-9.
18 See, ABA Opinion at pp. 12-14.
19 See, ABA Opinion at p. 12.
20 All of the ethics opinions and guidance discussed herein stress this point.
21 See generally, Florida Opinion; California Guidance at p. 2; ABA Opinion at pp. 10-11. This is a topic discussed in all guidance.
22 New Jersey Report at Appendix 2. The New Jersey Report also includes a helpful outline to guide discussions with AI vendors. See, id. at Appendix 3.
23 See, e.g., Florida Opinion. The ABA has also promulgated an opinion about what tasks may be delegated to non-lawyer assistants in the context of client intake, which offers additional guidance. See, ABA Formal Ethics Opinion 506 (June 7, 2023).
24 Pennsylvania Joint Opinion at p. 10.
25 ABA Journal of Legal Technology, "Will generative AI ever fix its hallucination problem?," John Roemer, October 1, 2024 (last accessed January 15, 2025).