ARTICLE
27 January 2025

Takeaways From The Inaugural Sedona Conference Working Group 13 On AI And The Law Annual Meeting

ND Galli Law

Contributor

Founded in 2015 by Nicole Galli, this boutique law firm specializes in intellectual property (IP) and commercial litigation, with offices in Philadelphia, New York, Boston, and Washington, D.C. Certified as a Women’s Business Enterprise (WBE), the firm focuses on providing high-quality, pragmatic legal services to innovators and entrepreneurs. Known for tackling complex legal issues, the firm works at the forefront of legal challenges, shaping the future of law. With a strong commitment to diversity, equity, and inclusion, it supports businesses owned by historically underrepresented groups. The firm has earned recognition in the *Philadelphia100* and *Best Law Firms* by U.S. News & World Report for its excellence and rapid growth.

As much as some of us may want to ignore it, it has become clear that the speed at which AI is being adopted and implemented into almost every aspect of our daily lives makes it impossible to discount any longer. Thus, the question is not whether we deal with it, but how. I recently attended the Sedona Conference Working Group 13 on Artificial Intelligence (AI) and the Law Inaugural Annual Meeting on January 15-17, where this and many other pertinent questions were tackled. The importance of, and interest in, this topic is underscored by the record-breaking attendance at the meeting, which had to be capped at 120 attendees (instead of the typical 60-80).

As several speakers noted, AI technology is not new, but we are clearly now embarking on a new AI age. From the latest version of ChatGPT to AI-generated summaries of Amazon product reviews, it's everywhere. Whether or not you personally use AI, it's difficult not to encounter it in some shape or form in daily life. Businesses across industries have already started to adopt it internally, drawn primarily by its ability to process vast amounts of data in a short amount of time, and are testing various uses of that capability as a means to increase productivity and efficiency. In the legal field, early adoption is most prevalent in the area of e-discovery and, particularly, document review tools. Any litigator will tell you that most commercial litigation cases involve tens, if not hundreds, of thousands of documents to be reviewed and/or produced in e-discovery. Depending on the size of the litigation, it can take teams of contract reviewers and hundreds of hours of attorney time to review and code a document production. That process is likely to change dramatically very soon. As just one example, before starting a large document review, you ordinarily need to teach the reviewing attorneys what to look for in terms of "hot" documents. But what if you could similarly teach an AI tool what to look for in terms of "hot" docs, and it could give you an answer in a matter of minutes rather than days or weeks? This and other capabilities are already in use, and as the tools are refined and more practitioners adopt them, the sheer processing power of AI applied in this context could substantially increase efficiency and make large-scale manual document reviews a thing of the past.
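To make the concept concrete, the sketch below shows one simplified way a "hot document" classifier could work: a handful of attorney-labeled examples train a model that then scores the entire unreviewed collection in one pass. This is an illustrative assumption only (plain Python with scikit-learn, and invented sample documents), not a description of any particular e-discovery product discussed at the conference.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical seed set: documents an attorney has already reviewed and labeled.
seed_docs = [
    "Email discussing the pricing agreement with our main competitor",
    "Routine calendar invite for the weekly staff call",
    "Memo proposing that certain records be deleted before the audit",
    "Company newsletter announcing the summer picnic",
]
seed_labels = [1, 0, 1, 0]  # 1 = "hot", 0 = not hot

# Unreviewed collection (in real matters, tens or hundreds of thousands of documents).
collection = [
    "Follow-up on last quarter's pricing discussion with the competitor",
    "Reminder that the parking garage is closed on Friday",
]

# Turn text into features and fit a simple classifier on the labeled seed set.
vectorizer = TfidfVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(seed_docs), seed_labels)

# Score every unreviewed document; higher scores surface likely "hot" docs first.
scores = model.predict_proba(vectorizer.transform(collection))[:, 1]
for doc, score in sorted(zip(collection, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {doc}")
```

The point is not the particular model but the workflow: the judgment an attorney would otherwise convey to a review team is conveyed once to the tool, which then applies it across the whole collection.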

On the flip side, however, significant downsides to AI use have also emerged. The issue of data bias, as one example, was discussed in a number of panels. Many speakers readily admitted that it is not possible to eliminate bias entirely in a given data sample, so the issue becomes how to set a standard for what should be considered a legally permissible versus impermissible level of bias. Moreover, while the near-universal accessibility of the technology is laudable, it also means the technology will be used not only in beneficial and positive ways, but also for inappropriate and even nefarious purposes. We have already seen lawyers sanctioned for submitting briefs wholly drafted by AI, and unchecked by the signing attorneys, containing citations to "hallucinated" case law, i.e., case law that does not, in fact, exist. Worse, a panel of state and federal judges at the Sedona Conference noted the emergence of AI-generated "evidence," particularly in family law matters, including custody disputes. It goes without saying that this type of (mis)use has the potential to have disastrous consequences.

Increasing the complexity of an already complex issue is the march toward the next generation of AI, or "agentic AI." These are rapidly developing autonomous systems that move beyond the current single-task-oriented systems into "Large Action Models" capable of independent, serial decision-making to achieve a desired objective. The most pressing questions and concerns surrounding agentic AI systems are thoughtfully addressed in a recently published article by Tara S. Emory and Maura R. Grossman, two members of the Sedona Conference AI Working Group. As detailed in the article, some of the most concerning characteristics of these agentic AI systems include not only the snowball effect that occurs once a bad decision is made and continually built upon, but also their ability to make independent decisions (including lying) that conflict with established norms and ethics in order to achieve an objective. Examples given by the authors include an AI agent tasked with winning a boat-racing video game that decided to repeatedly crash into point targets, rather than race, to outscore human players, and a report that GPT-4 tricked a human into solving a CAPTCHA for it.
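The boat-racing example illustrates a broader point about agentic systems: an agent optimizes whatever objective it is given, not the intent behind it. The toy sketch below is my own illustrative assumption, not the actual game or any real agent framework; it shows how a greedy, reward-maximizing loop can settle on a degenerate strategy of circling point targets instead of ever finishing the race.

```python
# Toy illustration of reward hacking: the agent sees only the score, not the
# designer's intent ("win the race"), so it repeats whatever action scores best.

ACTIONS = {
    "advance_toward_finish_line": 1,   # small, steady reward for actually racing
    "circle_back_to_point_target": 3,  # larger reward from respawning point targets
}

def greedy_agent(steps: int = 10) -> None:
    total = 0
    for step in range(1, steps + 1):
        # Serial decision-making: each choice maximizes the immediate score.
        action = max(ACTIONS, key=ACTIONS.get)
        total += ACTIONS[action]
        print(f"step {step:2d}: {action} (score so far: {total})")
    print("The race is never finished: the scoring rule, not the human objective, drove every decision.")

if __name__ == "__main__":
    greedy_agent()
```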

So, what do we do with a technology with seemingly endless promise yet so ripe for abuse and misuse? First, continued education is key. For legal professionals, as more and more clients use and adopt AI, you have to know what "it" is before you can competently advise on how it can or should be used, what the potential risks are, and how best to mitigate those risks. For clients, the same rule applies: before you adopt and implement a new technology into your business, you should know what it is you are actually adopting and think through the potential downstream effects of a particular implementation. Second, one of the most important topics discussed at the Sedona Conference meeting was whether this technology might require additional regulation and/or legislation, or whether the legislation and legal frameworks already in place are sufficient. For example, when AI is used to make employment decisions that result in discriminatory practices, do we really need (or want) new AI-specific legislation, or are current laws dealing with employment discrimination sufficient to combat the issue?

In other areas, such as intellectual property protection and litigation, the answer is less clear. The USPTO, for example, has issued guidance on both inventorship and subject matter eligibility with respect to the use of AI in patentable inventions, suggesting that there may be gaps in the current law. On the enforcement side, copyright holders appear to be the first (and largest) group of potential plaintiffs to challenge AI and the use of copyrighted material to "train" large-scale AI models such as ChatGPT.

The Sedona Conference AI Working Group will be tackling these issues and more as the group works through the next steps and decides which areas to focus on first. One thing is certain: AI is a game-changer. Where it is used, and how, will likely have significant impacts on society as we know it. Thoughtful consideration is required to navigate the emergence of this technology, balancing reasonable guidance for its use and implementation against the risk of unnecessarily hampering continued innovation. Working Group 13 is poised to do just that, and I'm excited to contribute to such an important endeavor.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
