The Federal Trade Commission (FTC) recently initiated a law enforcement sweep, Operation AI Comply, against multiple companies that the FTC alleges have "relied on artificial intelligence as a way to supercharge deceptive or unfair conduct that harms consumers."
The sweep targets a range of AI-related services and schemes, from "the world's first robot lawyer" [which my colleague, Kristen Niven, blogged about HERE and HERE], to false earnings claims, to using AI to generate fake reviews. According to FTC Chair Khan, "Using AI tools to trick, mislead, or defraud people is illegal... [and] there is no AI exemption from the laws on the books."
As AI becomes more ubiquitous, so too do the promises that it can revolutionize businesses and improve lives. Unfortunately, as the FTC's sweep reveals, many companies can't back up what they've promised to consumers.
Among the companies included in the sweep:
DoNotPay: Promoted as "the world's first robot lawyer," this service promised to help consumers file lawsuits or generate legal documents, but the FTC alleges that the company's AI lacked the expertise of a human lawyer, leaving customers with inaccurate legal advice. The FTC announced that "DoNotPay has agreed to a proposed Commission order settling the charges against it. The settlement would require it to pay $193,000 [and] provide a notice to consumers who subscribed to the service between 2021 and 2023 warning them about the limitations of law-related features on the service. The proposed order also will prohibit the company from making claims about its ability to substitute for any professional service without evidence to back it up."
Rytr: This case involved an AI service that could quickly generate an unlimited number of detailed, genuine-sounding reviews with minimal user input, producing reviews that bore no relation to any real consumer's experience and could deceive readers who relied on them for genuine consumer feedback. The proposed order settling the Commission's complaint is, according to the FTC, "designed to prevent Rytr from engaging in similar illegal conduct in the future [and] would bar the company from advertising, promoting, marketing, or selling any service dedicated to – or promoted as – generating consumer reviews or testimonials."
Ascend Ecom: The FTC also brought suit against an online business opportunity scheme that allegedly claimed it could make consumers rich "using proprietary software and artificial intelligence." Consumers allegedly paid the scheme operators between $30,000 and $80,000 to start a shop on Amazon, Walmart, and other ecommerce platforms, plus thousands more for store inventory, with the promise that they would soon earn thousands of dollars a month in passive income from the technology-driven store. Instead, the FTC alleges, "the promised gains never materialize, and consumers are left with depleted bank accounts and hefty credit card bills."
The Operation AI Comply cases highlight the Commission's growing scrutiny of AI-related claims and services. As FTC Commissioner Holyoak said recently at the National Advertising Division Conference: "As the nation's consumer protection and competition agency, the Commission is—and should be—at the forefront of addressing AI-related harms, whether they manifest as antitrust violations, deception, fraud, or some other form of substantial injury to consumers that the Commission has legal authority to address." We should therefore expect to see more activity in this space from the FTC as the technology proliferates.
{ "By cracking down on unfair or deceptive practices in these markets, FTC is ensuring that honest businesses and innovators can get a fair shot and consumers are being protected." - FTC Chair Khan