Believe it or not, one of my (Nolan Hurlburt's) first real lessons in law came courtesy of a typo. As a newly minted lawyer, I was tasked by my then-firm with some research, one of those all-consuming, time-sensitive kinds of research efforts that we are handed early in our careers. In this instance, there were a number of senior lawyers on the matter, so I toiled away, throwing everything I had into the task in hopes of impressing them. Then, as one does, I sent my email off late in the evening and proudly headed home, feeling accomplished (and awesome) that I had completed my task and contributed to real legal work.
Unfortunately for me, by the next morning, the feelings of awesomeness were erased by a reply-all response to my email, sent to the very team that I had hoped to impress, bluntly pointing out that I would not retain credibility at the Firm if my emails continued to contain typos. #Mortified.
Whatever management theory may say about the effectiveness of this approach, the lesson stuck: in law, quality matters. That lesson remains relevant and is arguably even more critical as we think about the use of generative AI in law and the increasing prevalence of what is being dubbed "AI slop" or "workslop."
A recent study conducted by Harvard Business Review in collaboration with the Stanford Social Media Lab defines workslop as "AI generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task." In other words, it describes the sometimes subtle and often cumulative errors, imprecisions and degraded quality that can arise when artificial intelligence is used by people to generate professional work product.
In simple terms, it is the type of work product that is "kind of right" and "kind of not right." Unlike hallucinations, which refer to outright fabrications or false statements (e.g., fake or incorrect case citations being the most notorious examples in legal right now), slop encompasses a broader spectrum of issues: awkward phrasing, minor factual inaccuracies and assumptions, formatting inconsistencies and a general erosion of the standards traditionally upheld in legal practice.
The study highlights that while teams look to AI to help improve their work product, sometimes the opposite ends up being true. Quickly produced output that appears polished or "good enough" on the surface can be "unhelpful, incomplete or missing crucial context." This is described as the "insidious effect" of workslop, which undermines AI's value proposition by:
- Creating confusion: The study describes this as the feeling of confusion that follows when a recipient starts reviewing work product that does not make sense. It leaves the recipient wondering what something is and whether AI may have been used to generate it in part or in whole.
- Adding extra steps: The study calls this the "workslop tax" and notes that "workslop uniquely uses machines to offload cognitive work to another human being." The recipient must now perform the work that arguably should have been performed by someone else: decoding the content, inferring missing or false context, or even rewriting the work product themselves (often requiring significantly more time than would otherwise have been needed).
- Challenging workplace dynamics: The study importantly identifies the very human realities of this ancillary effect of technological change. Beyond the impacts of additional time spent for the recipient (rewriting, additional fact-checking, extra meetings to address gaps), many respondents in the study actually described the challenge of navigating difficult conversations with producers of workslop (particularly those that require diplomacy and the preservation of hierarchical relationships).
The study, which involved 1,150 US-based full-time employees across industries, highlights the significance of the problem: 40% of respondents report having received workslop in the last month, and those same respondents estimate that 15% of the content they receive qualifies as such. Among participants receiving workslop, 53% report being annoyed, 38% confused and 22% offended.
But it's not all bad news. The study speaks to how legal teams can keep this from creeping into their culture. In particular, the study notes that this is not a new phenomenon so much as an existing reality in a new form: shortcuts and procrastination produced similar outcomes in a pre-generative-AI world. To use AI effectively, the study advocates that we:
- Recognize that generative AI is not appropriate for all tasks. In our experience with AI tools to date, we have seen success with drafting, document review and legal research. That said, these tasks all require thoughtful input (well-designed prompts) and seasoned oversight (appropriate review), anchored in carefully crafted policies guiding responsible use.
- Understand that AI can enhance creativity, but still requires work. This is something we emphasize in all our training, AI or otherwise. AI is not a "magic bullet," and effective use does not necessarily mean clicking one button and receiving instant output. It means celebrating getting to a first draft more quickly, while recognizing that the draft still requires review, input and revision, and that the tools are imperfect and can contain bias.
- Engage with AI as a thought partner and find new ways of working with colleagues. While some may view AI as a means to avoid communication, it can actually spark new conversations with colleagues, peers and leaders. In our training, we emphasize the importance of actually conversing with the chat. This is a new muscle to build and can feel strange, but the more comfortable one becomes with providing prompts, including context, sharing feedback and iterating on prompts (or starting over), the better the quality of the output (in contrast to what might be called the "one and done" or "magic bullet" approach to prompting).
What strikes us as particularly interesting about this study is that it highlights an important distinction between a perceived culture of perfection in law (which can inhibit progress) and a culture of quality (which can ignite progress) that must continue to be embraced by the legal profession.
While it might be easy to discount feedback received on work product (AI slop or otherwise) as an overemphasis on perfection (after all, we have all likely survived the reality of an email missing an attachment or a text message that autocorrects and has no punctuation, right?), it is better viewed as an emphasis on quality. Beyond that, in light of what the study shows, it represents an opportunity to pause and reflect on how to thoughtfully use AI to enhance our workflows and experiences with colleagues.
Finally, as a general tip, learn from Nolan's situation back in the day: try not to reply all when encountering AI slop. Instead, use these moments as opportunities to engage in new conversations with colleagues about how to walk this AI journey and combat workslop together.
About Dentons
Dentons is the world's first polycentric global law firm. A top 20 firm on the Acritas 2015 Global Elite Brand Index, the Firm is committed to challenging the status quo in delivering consistent and uncompromising quality and value in new and inventive ways. Driven to provide clients a competitive edge, and connected to the communities where its clients want to do business, Dentons knows that understanding local cultures is crucial to successfully completing a deal, resolving a dispute or solving a business challenge. Now the world's largest law firm, Dentons' global team builds agile, tailored solutions to meet the local, national and global needs of private and public clients of any size in more than 125 locations serving 50-plus countries. www.dentons.com
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances. Specific questions relating to this article should be addressed directly to the author.