On this episode of There Has to Be a Better Way?, Shannon Kirk returns to the podcast to share "Better Ways" from multiple perspectives: As a thought leader on the application of machine learning and artificial intelligence in litigation and internal investigations, she highlights the importance of metrics as evidence of effectiveness; and as an award-winning author and lawyer, she explains why words and precision matter.

Transcript:

Zach Coseglia: Welcome back to the Better Way? podcast, brought to you by R&G Insights Lab. This is a curiosity podcast, where we ask, "There has to be a better way, right?" There just has to be. I'm Zach Coseglia, the co-founder of R&G Insights Lab, and I'm joined, as always, by my friend and colleague, Hui Chen. Hi, Hui.

Hui Chen: Hi, Zach. Hi, everyone. Good to be back.

Zach Coseglia: Indeed. Hui, we have another repeat guest here with us today. Who do we have? We're very excited.

Hui Chen: I am very excited. Shannon Kirk, one of our most popular guests, is back with us to talk about something exciting. And also, we're going to learn something interesting about her.

Zach Coseglia: We are indeed. Shannon, welcome.

Shannon Kirk: I'm very happy to be back. Good to see you both.

Zach Coseglia: Hui is not lying. I think that your episode might be one of the most popular, if not the most listened to, so, congratulations on that.

Shannon Kirk: Thank you. It's because I had great, great interrogators.

Zach Coseglia: Well, let's see if we can have lightning strike twice. This episode is about recent AI (artificial intelligence) regulation. It's actually really well timed, because a few weeks ago, we had another podcast about artificial intelligence—but not so much from the legal angle—from the perspective of one of the world's leading AI ethicists, Rumman Chowdhury. Today, we want to talk more broadly about how AI is being treated in the courts, and also about some of the ways in which AI is just being used in the legal profession. With all of that said, let's just talk a little bit more about you and your perspective on this topic. First of all, tell us about your role here at Ropes & Gray in a little bit more depth, and how AI strategy and regulation is part of your practice.

Shannon Kirk: First of all, thanks again for having me back on. This is an exciting time in my career. After many years of leading the e-discovery practice here at Ropes & Gray—prior to that, I was a trial attorney—along comes GenAI. For the last several years, my team has mastered what is called "machine learning" as applied to litigation and investigations: How are we going to use machine learning and the tools around it to be more effective, efficient and time-sensitive when we are doing fast-paced investigations, and also to master large quantities of data for productions? I co-wrote the federal judges' guide on the use of machine learning (TAR, short for technology-assisted review, or continuous active learning) and have spoken on this topic for many years. There's even a federal court that quoted something I said in a case. Because of that experience that I and my team have had, we have now renamed our group the "Advanced E-Discovery and Generative A.I. Strategy" group. And why is that? Our group, and folks like me, have already cut our teeth on a number of really important issues that will carry over to the application of GenAI in litigation. We have already made mistakes, gotten better, and learned how to do the following in a more effective way for our clients.

One is thought leadership out there in the world: How are we engaging with industry groups around machine learning and creating principles, etc.? You better believe that we're going to apply that learning when working with those industry groups and thought leaders to create principles for GenAI. It's already happening in courtroom advocacy: making sure that our judges are engaged and educated on these technological issues much more quickly than we were with machine learning, and not allowing the commentators or the vendor space out there to overcomplicate things and to use what I like to call an artfully cultish language that excludes people and intimidates. We want to make sure that we are, from the jump, using precise language that is clear for everybody to understand, because then we get better cases.

Zach Coseglia: Shannon, you talked about your team's years-long experience with machine learning, and now, the recent transformation and focus of your team relating to generative AI. In our previous podcast with Dr. Chowdhury, we defined those terms, so rather than getting into a definition of those terms, what I'd love to hear from you about is how your team was applying machine learning in the legal profession, and how your team is now envisioning generative AI to become a tool, or how it is a tool in the legal profession.

Shannon Kirk: We have mastered the use of machine learning in litigation (productions to opposing parties, subpoena compliance to governmental bodies, attorneys general, etc., and internal investigations) in the following way. Traditionally, to comply with any of those requests and produce the documents asked for, we would collect a bunch of emails, office documents and chats, and we would run search terms. You would then review those search hits in no particular order: you would batch them out, typically to contract attorneys, junior associates or both, and get through it all. It's incredibly inefficient. If you study the rate of responsiveness in a search term-based review—and we have studied it—you will see a flat line of responsiveness. We have one study that showed something like a 9% responsiveness rate every single day, on average. So, what does that mean? It means that you've got to keep reviewing all the search term hits. Nothing goes faster. You're not going to find key documents, or more responsive material, any sooner; it could be that you find very relevant material on the last day of review, after nine months of review. Clearly not an effective use of anybody's time. And by and large, search terms are only as good as you construct them.
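To see what tracking that metric can look like, here is a minimal, hypothetical sketch (not the team's actual tooling; the column names and data are made up) of computing a daily responsiveness rate from a review log. A flat line, like the roughly 9%-a-day figure Shannon mentions, suggests the search terms aren't surfacing responsive material any faster over time.

```python
import pandas as pd

# Hypothetical review log: one row per document reviewed in a search-term-based review.
review_log = pd.DataFrame({
    "review_date": ["2024-01-02", "2024-01-02", "2024-01-03", "2024-01-03", "2024-01-04", "2024-01-04"],
    "responsive":  [True, False, False, True, False, False],
})

# Fraction of reviewed documents coded responsive each day. With search terms,
# this line tends to stay flat, so finishing the review means reviewing every hit.
daily_rate = (
    review_log.groupby("review_date")["responsive"]
    .mean()
    .rename("responsiveness_rate")
)
print(daily_rate)
```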

Along comes machine learning, and we can do a few things with it. One way is through predictive coding, so to speak. The more current version that we are using, and the most effective, is continuous active learning, where humans teach the computer. I, as the human, say, "I like this document: responsive. I don't like this document: not responsive." It's just like Pandora: "I like this song. I don't like this song." You are teaching the machine—the machine is learning from you what you like. It will then deliver the more responsive information to you first. Now, when you graph it out—and we have—the rate of responsiveness looks like a nice ski slope: it goes gradually down, which means you're seeing more of the responsive material at the start, and at a certain point you can decide, "You know what? We're not really seeing truly unique information anymore." And then, you cut off the review—you get it done faster, it's more substantively effective, and timelines are better. That also impacts budgeting, because you're going faster, and it impacts staffing. All of this is going to be relevant when we start talking about GenAI.
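To make the continuous active learning loop concrete, here is a minimal sketch using off-the-shelf scikit-learn components. Everything here is an assumption for illustration: label_fn stands in for a human reviewer's coding decision, the stopping threshold is arbitrary, and real TAR/CAL platforms are far more sophisticated. It shows the core cycle, though: train on what has been coded so far, push the likeliest-responsive documents to the reviewer next, and watch the per-batch responsiveness rate (the "ski slope") to decide when review can defensibly stop.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def run_cal(docs, label_fn, batch_size=50, stop_rate=0.05, seed_size=100):
    """docs: list of document texts. label_fn: stand-in for the human reviewer
    (returns True if a document is responsive). Both are hypothetical inputs."""
    X = TfidfVectorizer(max_features=20000).fit_transform(docs)

    # Seed the model with a random starter batch coded by the reviewer
    # (assumed to contain both responsive and non-responsive examples).
    rng = np.random.default_rng(0)
    labeled = [int(i) for i in rng.choice(len(docs), size=min(seed_size, len(docs)), replace=False)]
    codes = [label_fn(docs[i]) for i in labeled]

    while len(labeled) < len(docs):
        model = LogisticRegression(max_iter=1000).fit(X[labeled], codes)
        seen = set(labeled)
        remaining = [i for i in range(len(docs)) if i not in seen]

        # Score the unreviewed pool and push the likeliest-responsive documents next.
        scores = model.predict_proba(X[remaining])[:, 1]
        batch = [remaining[j] for j in np.argsort(scores)[::-1][:batch_size]]
        batch_codes = [label_fn(docs[i]) for i in batch]
        labeled.extend(batch)
        codes.extend(batch_codes)

        # The per-batch responsiveness rate is the "ski slope": once it drops
        # below the agreed threshold, the team can consider cutting off review.
        rate = sum(batch_codes) / len(batch_codes)
        print(f"reviewed={len(labeled)}  batch responsiveness={rate:.0%}")
        if rate < stop_rate:
            break
    return labeled, codes
```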

Alongside continuous active learning, there's also concept analytics, where you can take a large corpus of documents, categorize them into buckets, and then more quickly go to the buckets that look like they might have more relevant information. Use both together and you can really fast-track a review. Again, you change staffing, you change budgeting, you're going faster, but you're also being smart about where you put your star players—the people who might be billing more than, say, a contract attorney, or who are really deeply embedded in the substance of the case: "I'm going to staff them on looking at the conceptual clusters, or on training that continuous active learning model on what I like." So, compared to that traditional model where you would have armies of lawyers reviewing multitudes of data for months (sometimes years) on end, you now shift to a staffing model with machine learning where you have fewer people doing it, because it's fewer documents and less time. But you also have a division of labor that fits the technology, so maybe you have some higher-billing people, but they're doing really concentrated work. And they're also working with folks like me, checking the metrics every day, checking the health of the model, so that when I get called into court to say, "Yes, this was a defensible process," I've got the receipts to show it, because we've been tracking those metrics and that health all along.
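As a rough illustration of the concept-analytics idea, the sketch below buckets a corpus with a plain k-means clustering over TF-IDF vectors and labels each bucket with its most central terms, so a more senior reviewer can decide which buckets to prioritize. This is hypothetical and far simpler than the analytics engines commercial review platforms actually use.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def concept_buckets(docs, n_buckets=10, top_terms=5):
    """Group documents into concept buckets and label each with its dominant terms."""
    vec = TfidfVectorizer(stop_words="english", max_features=20000)
    X = vec.fit_transform(docs)
    km = KMeans(n_clusters=n_buckets, n_init=10, random_state=0).fit(X)

    terms = np.array(vec.get_feature_names_out())
    buckets = {}
    for c in range(n_buckets):
        # Label the bucket with the terms closest to its centroid so reviewers
        # can decide at a glance which buckets look most relevant.
        label = ", ".join(terms[np.argsort(km.cluster_centers_[c])[::-1][:top_terms]])
        buckets[label] = np.where(km.labels_ == c)[0].tolist()
    return buckets
```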

Zach Coseglia: Hui, one of the things that I love about what Shannon's sharing is not only the obvious Better Ways in how she's using technology and innovation to make the investigative, litigation or subpoena response process better and more effective, but also the application of one of our most common Better Ways, which is measurement. Shannon, you just said that you're measuring the model every day and looking at the process on a continuous basis, but you're also one of the few people I know who actually has almost clinical trial-like data on why your ways are better than the old, traditional ways, and I have so much respect for that.

Hui Chen: This reminds me of one of the times when I was in DOJ, and I would say the single best compliance presentation I had seen—and I think the prosecutors would agree with me—was when a compliance officer came in, and for every adjective he used, he had a quantitative study to back it up. And that's what you just did, Shannon—every time you said something, you said, "We have studied this. We have seen the numbers. We have charted this out." That mindset is really something we'd like to see more of.

Shannon Kirk: It's really important, and it's going to play into my answers on GenAI, too. I'll give you a little personal anecdote. My mom is a very New Hampshire woman, and we are very close, but we are total opposites—she is a crunchy granola kind of woman, and I love fashion and consider Saks my personal heaven, so we are quite a pair together. She drilled into me that she hates phonies—this whole "fake it till you make it" or over-bloated marketing, or anything that's just not real and true. And that is really important to me. So, when we started using machine learning and when we started saying, "No, this is a Better Way," we were not going to put ourselves out there to clients and say, "You know what? It's just better because someone else told me or a vendor told me that." We're going to have the receipt. We're going to have the metrics. We're going to be able to cite, chapter and verse, why mathematically, statistically, financially, this is better and it's more defensible. Not only is it faster, but we actually found more items of use to you, client, in a useful way. And so, we continue to track this.

It's important right now, at this juncture in the law in talking about GenAI, where there are a lot of folks out there who are, in my view, getting us a little bit ahead of where we actually are with the technology and using overgeneralized terms to suggest that they are using "AI." I would challenge those folks on how many metrics they have to demonstrate how defensible it is, and how much better it is compared to where we are with machine learning right now. I'm not going to sit here and say, "GenAI could never replace how great machine learning is." It will replace it; we will have that evolution. But it is not the case that we can yet demonstrate to a court, to a compliance officer, to a governmental body or to a plaintiff's counsel that yes, we are now using GenAI in document review the way that I just explained to you. I can tell you where I think we're going and how we're going to apply those learnings to GenAI—that seems very clear to me. It's just that we need to be careful right now and not get ahead of ourselves, because what will happen? The same thing that happened with machine learning. We will not get good ESI (electronically stored information) protocols. We will make mistakes on principles. We will make mistakes on case law. And frankly, I just think that, having cut our teeth on machine learning for many years, we're better equipped to avoid much of that now.

Zach Coseglia: One of the things that I've found really interesting about the development of this space over, honestly, the past six to nine months is that there are folks like you who have been living in it for years, but then, sometime this year, the terms "generative AI" and "artificial intelligence" suddenly became common vernacular, from a pop culture perspective but also in the legal profession. Folks who would never have uttered these words before started using them as though they were experts. I don't really know where it came from—I suppose ChatGPT probably played a pretty big role in that. But it was almost as though the legal profession awoke from some sort of fever dream, and all of a sudden, AI was on everyone's minds and on everyone's tongues, and not necessarily in the most accurate ways.

I actually want to talk a little bit more about you as a person, and not just as a lawyer, Shannon. I think most folks know this about you, but not everyone does: in addition to being a world-class lawyer, you are an award-winning and best-selling author. Words matter to you, as a lawyer and as an author. Tell us a little bit more about your perspective on this.

Shannon Kirk: Yes, words really matter, especially in the law and in writing. Every single day, whatever hat I'm wearing, words matter: if I'm writing, words matter; if I'm doing the law, words matter. In writing, for example, every single word you choose in a novel could change or shift a scene. One example I like to use: say you are writing a very intimate, romantic scene, just two people alone, and it's intended to be an emotional scene. Part of writing is hitting all the senses: what are they seeing? What are they hearing? What are they tasting? What are they feeling? If, in describing the sounds and everything going on around those two people, you inadvertently use the word "beep" anywhere in that chapter or scene, guess what? For me, the scene is thrown. Maybe you can recover it somehow. Maybe it's a pair of robots having a romantic scene. I don't know; context matters. But in a typical human-human relationship, in a cabin in the woods, having a romantic scene, you'd better not use any technology words. That's my view. We've got to be precise, and we've got to be careful with our words.

I love quoting Margaret Atwood, because not only is she a major, awesome author, but one of her best quotes—I have a sign in my house—is "A word after a word after a word is power." Power comes from words. How many dictators, world leaders, good or bad things have happened out of speeches by orators? It comes down to words that they are using, and it is really, incredibly important that we acknowledge that. It is incredibly important when we're talking about the law. Look at any trial attorney—look at how they convince a jury or a judge. They don't just get up there and just start talking—it is theater. You've got to bring that theater into your every single day and use precision when you are advising clients, especially when we're talking about technology. It is not okay, I don't think, for the modern litigator to treat technological terms as something that they don't want to deal with, and they don't want to be precise on. A lot of times, I'll jump into a call, and I will hear things like, "That's just all that technological stuff and they'll deal with that." Well, you know what? It matters. It could mean a few million dollars to your case if you don't home in on and understand what these words mean and the implications of the words that you use. So, yes, words really, truly matter to me as a person, and for both careers that I have: writing and in the law.

Hui Chen: I so appreciate the particular sensitivity that you bring to this as a lawyer and a writer. If we all reflect on our daily lives, we think about how many problems are caused by miscommunication—what kind of impact our words have on people's perception of things, how they think about it, how they react to you. Even at the most fundamental levels, I find a lot of situations where I use one word, and in my mind that word means A and B, but unbeknownst to me, in the other person's mind, that word actually means A and X and Y, and we proceed with the conversation. I think I'm talking about A and B, they're talking about A, X, and Y, and five minutes later, we're completely not understanding each other. I think this is the kind of experience we all have at some point, so it really illustrates why words and precision in the use of words are so important. So, Shannon, the problem that we have with imprecise language is not something that we just see among common people and perhaps in regulatory proposals, but we also have seen it from the courts, is that right?

Shannon Kirk: Yes, we do have some court orders that have come out in response, I believe, to some headline-grabbing cases in which a couple of lawyers may have used a generative AI tool, and that tool may have generated fake cases that don't exist, which is obviously not a good thing. In reaction—it seems, perhaps, to the fear that lawyers would just start using GenAI and submitting inaccurate cases—we got some standing orders, a few of which require certification of whether you are or are not using GenAI, or disclosure if you're using "AI," without any definition of the term. So, you can see the problem: the way a couple of these court orders have been written, they could encompass machine learning. As I just said, we've been using machine learning for many years and, in most instances, don't have a duty to disclose that use. Now, there is a judge out of the SDNY, Judge Subramanian, who came out and simply cautioned lawyers on the use of GenAI, referring to it as "ChatGPT or other such tool," and reminded them that they should confirm the accuracy of what such a tool produces—but that's an existing obligation on lawyers, right? Perhaps the judge felt it was important to come out and remind lawyers, and it is good to have the reminder, especially now that these tools continue to be in use. But I don't know that we needed the other orders, which have now introduced a level of confusion: What do we have to certify, and when? What are we defining as "AI"? And what GenAI tools are actually being used in litigation that would concern a court? Given that a few commentators have come out and said that a couple of these court orders are not precise enough and are actually confusing the matter, I would predict that we may start to see more caution from the bench about issuing standing orders, and more reminders, like the one out of the SDNY, of our obligations as counsel to make sure that what we submit to the court is accurate.

Zach Coseglia: I want to turn this now from a backward-looking review to more of a forward-looking discussion. So, we've looked backwards, or at the present, and we see a little bit of imprecision. We see growing pains associated with new technologies and new ways of doing things. They may very well be Better Ways, but they're certainly new, and maybe there's a little bit of confusion. What are we seeing companies do in response to this interesting time of transition around the use of AI generally, not just in the legal profession, but for business purposes more broadly?

Shannon Kirk: Most of them are embracing it because, in various use cases, you can see almost immediate efficiencies, cost savings, etc., which is good—we need to be innovative and allow for that. What we're seeing—and this is what I love—is an acknowledgment that we need to break down the silos we live in. What you need, for a holistic view and application of GenAI, is a multidisciplinary approach—multiple departments weighing in on how the organization is going to use GenAI ethically and consistently, while making sure it's effective and useful, stays within regulatory guidelines, doesn't trip us up on copyright laws, and doesn't trip us up on future production needs or privilege, etc.—all of these various concerns. So, companies are saying, "You know what? Let's make sure we don't hobble innovation, because we get efficiencies and a market edge if we're using technology correctly, but let's be responsible, and really quick about it." They are standing up these teams and committees immediately and approaching it in a few different ways. Some are finding targeted use cases that they can build a model around, which is smart: "We know, for example, that we want to use it in this particular use case; we'll cut our teeth on that and branch out from there." Others are saying, "You know what? We're going to use this one GenAI tool and test it out, and branching from that, we're going to have multiple use cases, and then, branching off of those, other use cases, so it looks like the Boston subway map." They're building that hub-and-spoke system, with branches, with multiple disciplines making sure they are considering the multitude of issues that can come into play with GenAI. That truly is needed, because what we don't want is somebody in accounting, somebody in R&D, and somebody in sales all using their own different versions of different GenAI tools in different ways, and using and storing data in ways that conflict with legal obligations.

Zach Coseglia: You've got a friendly audience here in terms of the power of multidisciplinary teams, for sure. Shannon, I want us to get to know you even better and turn our attention to our standard Better Way? questionnaire, inspired by Proust, Vanity Fair, Inside the Actors Studio and Bernard Pivot. I think it's been a while since I've listed them all, given them all credit, Hui. Shannon, last time you were on the podcast we didn't do this, and so, now it's your time. Hui, what's our first question for Shannon?

Hui Chen: Shannon, first question, you get to pick one question out of two to answer. The two options are: If you could wake up tomorrow having gained any one quality or ability, what would it be? Or you could answer: Is there a quality about yourself you're currently working to improve? If so, what?

Shannon Kirk: The first one I'm going to answer, because I very much want to wake up tomorrow knowing how to play the cello. I have a cello at home that I got for my 50th birthday and don't know how to play, and I need time to learn how to play it.

Hui Chen: Wow, we have three musical souls here. My wish for that question was to be an opera singer.

Zach Coseglia: That's great. Alright, Shannon, second question, also you get to pick one of two. Who is your favorite mentor? Or: Who do you wish you could be mentored by?

Shannon Kirk: I guess I wish I could be mentored by Gabriel García Márquez, who has passed away. I am blown away by every single word he's ever written.

Hui Chen: Next question: What is the best job, paid or unpaid, that you've ever had?

Shannon Kirk: The best job I have ever had is being a writer. It is my sole passion in life. It is all I ever want to do. I am here doing this job, to be honest with you, so that I can be a writer.

Zach Coseglia: That's great—I love that. No hesitation on that one. Alright, the next question is: What's your favorite thing to do?

Shannon Kirk: Write. It's literally all I ever want to do.

Hui Chen: What is your favorite place?

Shannon Kirk: My favorite place is my dining room right now, because I just renovated it, and it is honestly my favorite place on the planet right now.

Zach Coseglia: What makes you proud?

Shannon Kirk: I'm proud that my son, who is a sophomore in college, is happy and smiles, calls his mother and is on the dean's list.

Zach Coseglia: Congratulations—that's great.

Hui Chen: Wow, that's awesome. We go from the profound to the mundane. What email sign off do you use most frequently?

Shannon Kirk: I guess it would be "best."

Zach Coseglia: That's a very popular answer. Alright, this is a good one for you: What trend in your field is most overrated?

Shannon Kirk: I don't know if it's overrated and I don't know if it's a trend, but it's an over-complication of machine learning. I don't think that it needs to be a complicated thing.

Hui Chen: Last question: What word would you use to describe your day so far?

Shannon Kirk: "Constant."

Zach Coseglia: Shannon, it's been so good to get to know you a little bit better and to have you back on the podcast. Looking forward to the next one. Thanks, Shannon. And thank you all for tuning in to the Better Way? podcast and exploring all of these Better Ways with us. For more information about this or anything else that's happening with R&G Insights Lab, please visit our website at www.ropesgray.com/rginsightslab. You can also subscribe to this series wherever you regularly listen to podcasts, including on Apple, Google, and Spotify. And, if you have thoughts about what we talked about today, the work the Lab does, or just have ideas for Better Ways we should explore, please don't hesitate to reach out—we'd love to hear from you. Thanks again for listening.
