The Federal Trade Commission (FTC) announced a "sweep" targeting AI-related conduct this week. The cases provide insight into how the agency may approach AI-related issues going forward and illustrate differences among the commissioners over how to handle the questions AI raises.
Three of the cases involved marketers making false earnings and business opportunity claims, promising buyers income from AI-powered ecommerce storefronts. The FTC's approach here was straightforward and consistent with how it has handled other money-making claims. Not surprisingly, all three cases were voted out 5-0, and the FTC has obtained asset freezes against the companies and some of their principals.
The other cases were more novel and highlighted some of the challenges raised by AI.
In the case against DoNotPay, the FTC alleged that the company falsely claimed its "robot lawyer" could replace legal services. The FTC challenged false and unsubstantiated claims that the AI tool operated like a human lawyer in generating demand letters, initiating small claims proceedings, and performing compliance work on business websites.
The FTC challenged as false claims that the AI tool could analyze a small business's website for hundreds of compliance issues based solely on an email address. The agency also challenged as false claims that membership in the company's program included features to protect a copyright and to generate a customized cease and desist letter for a defamation claim, a non-compete agreement, and a residential lease.
The complaint referenced actions by the California Bar directing DoNotPay to cease and desist many of its activities, which the company allegedly told Bar Counsel it would do but did not. These actions appear to be the basis for Commissioner Andrew Ferguson's statement that it is not the FTC's job to police state bar rules or to protect the legal industry from competition. The case settled, with $193,000 to be refunded to consumers and injunctive relief aimed at stopping or correcting false statements made in the past.
The case against Rytr involved the convergence of two current hot topics: reviews and AI. One function of a "writing assistant" sold by Rytr involved generating testimonials and reviews. The FTC alleged that the AI program generated reviews that often bore no relation to the user's input or experience, and that such reviews could deceive consumers who read them. The agency challenged such conduct as unfair under Section 5 and as providing the means and instrumentalities for others to make deceptive statements.
In her dissent, Commissioner Melissa Holyoak (joined by Ferguson) pointed out a rather large hole in the FTC's case: there were no allegations that any such reviews were ever actually posted or relied upon by a single consumer. Holyoak argued that the potential harm was speculative and failed to satisfy the statutory requirement that, to be unfair, conduct must cause or be likely to cause substantial injury to consumers. Holyoak also criticized the FTC's failure to consider the countervailing benefits the review-generation tool provided by creating first drafts that users could then tailor to their own experience.
Ferguson criticized the FTC's use of its "means and instrumentalities" theory, pointing out that Rytr itself had not made any false statements and that there was no evidence anyone else had made deceptive statements to consumers.
Both Ferguson and Holyoak expressed concern that the FTC's aggressive interpretation of its enforcement authority could chill legitimate uses of AI technology.
The sweep marked the beginning of the FTC's efforts to deal with AI. How the agency balances the approach taken by the majority in Rytr and the concerns expressed by the Republican commissioners remains to be seen and is likely to be influenced by what happens on November 5.