ARTICLE
21 November 2025

Hype Responsibly: Legally Promoting Your AI

K&L Gates LLP


What You Need To Know In A Minute Or Less

Regardless of the cutting-edge nature of artificial intelligence (AI), its recent popularity has revived an age-old problem: how to lawfully market products that feature it. Deceptive claims regarding AI have been dubbed "AI washing" and can invite both government enforcement and civil lawsuits.

In a minute or less, here is what you need to know.

1. Government Watchdogs Are Already on the Prowl

The Federal Trade Commission (FTC) has led the charge among enforcers rooting out AI washing. In September 2024, the FTC announced Operation AI Comply, a law enforcement crackdown on actors relying "on artificial intelligence as a way to supercharge deceptive or unfair conduct that harms consumers."1 The FTC's announcement included details of actions it had taken against companies that, according to the FTC, had "seized on the hype surrounding AI" and were "using it to lure customers into bogus schemes."2

One such actor was DoNotPay, which made the lofty claim that it offered, for US$49.99 a month, a subscription service to "the world's first robot lawyer."3 According to the FTC, DoNotPay claimed that its "AI lawyer" could perform "legal services such as drafting 'ironclad' demand letters, contracts, complaints for small claims court, challenging speeding tickets, and appealing parking tickets."4 The FTC disagreed.

The FTC's investigation uncovered that DoNotPay had not, among other things, trained its AI "on a comprehensive and current corpus of federal and state laws, regulations, and judicial decisions or on the application of those laws to fact patterns," and had neither itself tested nor employed attorneys to test "the quality and accuracy of the legal documents and advice generated" by its product.5 The FTC and DoNotPay ultimately entered into a consent agreement requiring a monetary payment as well as notice to DoNotPay's customers that, among other things, it "did not have sufficient proof of our claims that DoNotPay operates like a human lawyer when it generates demand letters and initiates cases in small claims court" and that it had "stopped making these claims and will not make them in the future unless we have adequate proof."6

Despite signals of a deregulatory environment under the current administration, enforcement has not slowed on the AI-washing front.

In April 2025, the SEC and DOJ jointly announced civil and criminal securities fraud charges against Albert Saniger, founder and former CEO of the mobile shopping application Nate.7 Saniger allegedly induced more than US$40 million in investments with claims that Nate used AI to autonomously process online purchases when, in reality, Nate "relied heavily on teams of human workers—primarily located overseas—to manually process transactions, mimicking what users believed was being done by automation."8

In August 2025, the FTC issued an order prohibiting Workado from making misleading statements about its "product's effectiveness at detecting content generated or altered by" AI.9 Workado sold a subscription service that purported to use AI to determine whether written content, including marketing content, is AI-generated. The FTC also ordered Workado to, among other things, notify its customers that, in the FTC's view, Workado lacked proof for its claims about its product's accuracy rate and that Workado would not in the future "make claims about the accuracy of our AI content detection tools unless we can prove them."10

2. Private Actors Are Also Jumping into the Fray

Though for now less common and less developed than enforcement actions, lawsuits by private actors have also been filed over alleged AI washing.

For instance, investors have sued over public statements about the development of AI-related product features. In September 2024, investors in software company GitLab sued over alleged misrepresentations about the incorporation of AI in the company's products.11 These types of securities lawsuits are on the rise. A securities litigation clearinghouse run by Stanford Law School logged seven AI-related cases in 2023, 15 in 2024, and 12 in just the first half of 2025.12

Consumers have also taken action. For example, in March 2025, a consumer class action complaint was filed against Apple over alleged misrepresentations about Siri's AI capabilities.13

Lessons Learned—Thus Far

As AI becomes more common, so too will lawsuits challenging claims made about its use. Companies would do well to put practices in place to minimize the risk of such litigation and to position themselves to defend against it effectively. The following are some early lessons gleaned from litigation and enforcement activity:

The Fundamentals Still Apply

AI may be new and trendy, but the tests for whether marketing is lawful remain the same. Existing practices to prevent and defend against claims of misrepresentation still apply.

Know Your AI

Even if the legal tests remain the same, AI is complex and evolving at lightning speed. Invest the time and effort to understand the capabilities, limitations, and use cases for your AI. Document these efforts carefully so that any claims made about your AI are supported.

Align Your Legal and Marketing Teams

Ensure any AI-related claims proposed by technical or marketing teams are vetted by legal teams with sufficient technical competency to understand the claims made and how they compare with the actual capabilities of your product.

Distinguish Present and Future

Limit specific claims to your AI's existing, not aspirational, capabilities and use cases. Clearly label aspirational claims as forward-looking statements describing how you hope your AI might perform in the future.

Monitor Claims Made

Establish an audit system whereby marketing claims, both existing and aspirational, are regularly evaluated and updated for accuracy. This is especially important given the breakneck pace at which AI is developing, where even accurate claims can quickly become outdated and potentially misleading.

As companies rush towards the shining light that is AI, their statements are going to be scrutinized by both regulators and class action plaintiffs. Our attorneys can assist in assessing and mitigating these litigation and enforcement risks. Now is the time to prepare.

To view citations, please see the publication page on our website.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
