Letter from the Editor
Dear Readers,
They say February is the shortest month, and every year I believe it, until I sit down to write this month's ICYMI and realize just how much happened in those twenty-eight days. Regulators, legislators, and courts have a way of making the calendar irrelevant.
As a kid, I loved the commercials more than the game. While everyone else watched the plays, I waited for the ad breaks. Some things never change, though now I watch with a legal lens. This year, roughly a quarter of Super Bowl LX commercials featured AI in some form, with almost no on-screen disclosure. Equal parts fascinating and alarming, at least for me. It made me think that writing about it might help those of you who work on advertising issues every day. The short version of my feature article: regulators are watching all year, not just during the Super Bowl. One key takeaway is that companies should be transparent about AI use in ads, because the "we disclosed it in the fine print" defense won't work for regulators or, it seems, for your customers, who may be even more alarmed by AI-generated content in advertising.
Beyond the Super Bowl, this issue covers a lot of ground. On AI, Congress is moving on multiple fronts, including a bill that would require retroactive training data disclosures for every generative AI model already on the market. Courts are also answering questions practitioners have long debated, including a Southern District of New York ruling that AI-generated documents shared with counsel are not privileged. Privacy had a busy month too, with the FTC targeting data brokers over military personnel data, Connecticut releasing its first CTDPA enforcement report, and the Supreme Court granting cert in a case that could reshape the Video Privacy Protection Act. The Tesla Autopilot verdict being upheld is also a signal worth heeding for anyone whose AI marketing may be outpacing actual capabilities.
As always, I am grateful for this community. I would love to hear what topics are top of mind as we move into spring. My inbox is always open!
Warmly,

Artificial Intelligence
Federal AI legislation is advancing on multiple bipartisan fronts: the CLEAR Act would require training data disclosure for every generative AI model already on the market, sponsors of a forthcoming framework bill are comparing the moment to the Communications Act of 1934, and the FTC resolved its latest AI-washing enforcement action with a dual-prong settlement template that companies should read carefully, signaling that AI-capability claims will face continued scrutiny. At the state level, courts are answering AI-related legal questions that practitioners have long debated in the abstract, and the answers carry immediate, concrete compliance implications. A federal court has held that AI-generated documents shared with counsel are not privileged. A $243 million verdict against Tesla has been upheld on the theory that users cannot be blamed for systems marketed beyond their capabilities. And state legislatures are advancing attorney ethics bills, minor protection laws, data center moratoriums, and algorithmic pricing prohibitions at a pace that makes the state-law patchwork more consequential by the month.
- The Southern District of New York held that documents created with Anthropic's Claude and later shared with counsel are not protected by attorney-client privilege.
- If the CLEAR Act is enacted, every generative AI model currently available to consumers would face disclosure requirements about training datasets.
- California's SB 574 would prohibit attorneys from entering client confidential information into public generative AI systems and require them to verify work done by AI on their behalf.
Federal
CLEAR Act Would Require AI Companies to Disclose Copyrighted Training Data. Senators Adam Schiff (D-CA) and John Curtis (R-UT) introduced the Copyright Labeling and Ethical AI Reporting (CLEAR) Act on February 10, 2026, which would require companies to submit a notice to the Register of Copyrights detailing all copyrighted works used in training datasets before the public release of any generative AI model. The Copyright Office would maintain a publicly available online database of all notices filed, and civil penalties would apply for failure to submit required disclosures. Critically, the bill applies retroactively to generative AI models already available to consumers, meaning existing models would be covered. The bill is endorsed by SAG-AFTRA, WGA, DGA, RIAA, the Authors Guild, and ASCAP, though the Motion Picture Association is notably absent. This matters because the retroactivity provision would reach every major AI model currently on the market, creating immediate compliance questions around what counts as sufficiently detailed disclosure, how to document training datasets, and what civil penalty exposure looks like for non-compliance.
Senators Call for Oversight of AI Chatbots and Safeguards Against Elder Fraud. Senate Aging Committee leaders Gillibrand and Scott sent a bipartisan letter to FTC Chairman Ferguson, urging the FTC to expand its inquiry into generative AI companions to include older adults. The Senators also warned that AI heightens the risk of seniors being scammed and cited a 2025 Reuters/Harvard study showing that AI chatbots can be easily manipulated into crafting convincing phishing emails.
FTC Resolves AI-Washing Case Against Growth Cave. The FTC resolved its enforcement action against Growth Cave, which marketed an AI software product called GrowthBox, claiming it would automate nearly 100 percent of the process of setting up and running an online education course. The FTC alleged the technology actually required users to upload advertisements manually, set appointments, and input messages, meaning the AI automation claims were false. The proposed settlement order bars defendants from misrepresenting that a product or service uses AI to maximize revenues or otherwise enhance profitability, effectiveness, or efficiency, covering both misrepresenting that a product uses AI when it does not and making misleading claims about AI performance, even if AI is in fact present. This matters because AI-washing enforcement is continuing under the Ferguson FTC, and the dual-prong framing signals that companies must be able to substantiate both the presence of AI and specific capability claims about what AI will actually do for customers.
States
California Senate Passes AI Ethics Legislation for Lawyers. The California Senate passed SB 574 (introduced by Sen. Tom Umberg), codifying previous "guidance" into law as a direct response to the growing wave of attorneys submitting court filings containing AI-hallucinated citations. The legislation prohibits attorneys from entering client confidential information, PII, or other nonpublic data into public generative AI systems (e.g., ChatGPT), including SSNs, medical and financial information, addresses of parties and witnesses, and court-sealed materials. Lawyers must take reasonable steps to verify AI-generated content, correct hallucinations, and remove biased or harmful output, including work generated by others on their behalf. AI use cannot unlawfully discriminate or produce disparate impacts across a broad set of protected characteristics. Attorneys must consider (though not necessarily make) disclosure when AI is used to generate public-facing content. SB 574 signals that California intends to hold lawyers accountable for understanding the limitations of the tools they use, and other states are expected to follow. Businesses should ensure that internal legal departments and outside counsel are notified of this development and should update or implement internal and external policies on how attorneys may use AI. The bill now heads to Assembly committees before a potential vote ahead of the late August adjournment.
New York Fair News Act. New York's legislature is considering new legislation to address the use of generative AI in creating news content. The New York Fundamental Artificial Intelligence Requirements in News Act (FAIR News Act) would require any news content that is substantially composed, authored, or created through the use of generative artificial intelligence to include a visible disclaimer. It also mandates human editorial oversight, requiring a person with editorial control to review and approve AI-generated content before publication. Organizations must disclose to employees how and when AI is deployed, and the bill includes privacy safeguards preventing AI systems from accessing confidential source information. The substantial-composition threshold and the human editorial-approval requirement could create significant compliance obligations for media, publishing, and AI content clients in New York.
New York Data Center Moratorium. New York introduced S9144, which proposes a three-year moratorium on permits for new data center construction across the state. National Grid New York reports that large-load electrical connection requests tripled in one year, with additional demand expected over the next five years. If enacted, the moratorium would directly impact AI infrastructure investment and could serve as a template for other high-density grid states watching energy costs climb.
New York Court Holds AI-Generated Documents Not Protected by Attorney-Client Privilege. In U.S. v. Heppner, the U.S. District Court for the Southern District of New York ruled that documents a criminal defendant created using Anthropic's Claude AI assistant and subsequently shared with his attorneys were not protected by attorney-client privilege or work-product doctrine. Judge Jed S. Rakoff found that the 31 AI-generated documents—created by the defendant to conduct queries related to a government investigation—did not constitute confidential communications seeking legal advice from counsel and noted that Claude's privacy policy indicates user inputs may be shared with third parties. The court rejected arguments that the documents reflected defense counsel's legal strategy, emphasizing that counsel neither directed nor was involved in creating them. This ruling has immediate implications for any business facing litigation or government investigation. Employees and executives who use AI tools to research legal questions, analyze potential liability, or prepare materials they later share with counsel may be creating discoverable evidence rather than privileged communications. Companies should update their AI usage policies to address this risk, consider requiring that any AI-assisted legal research be conducted only at counsel's explicit direction and documented as such, and evaluate whether enterprise AI platforms with stronger confidentiality protections are appropriate for sensitive matters. The decision also suggests that including language in AI prompts indicating research is being conducted at counsel's direction may help preserve privilege claims.
Ohio Introduces Bipartisan Bill to Ban AI-Driven Pricing Algorithms. House Bill 665, introduced by Representatives Christine Cockley (D-Columbus) and Tex Fischer (R-Boardman), would prohibit pricing algorithms that collect non-public competitor data such as future pricing, rent amounts, customer lists, and internal business plans. The bill would make using or distributing an AI-based algorithm trained on non-public competitor data a violation of Ohio's Valentine Antitrust Act. It would require disclosure from businesses with $5 million or more in gross receipts. A companion Senate Bill 79 received a committee hearing in March 2025, with Senator Willis Blackshear citing a December 2024 White House report showing pricing algorithms add an average of $70 per month to rent in algorithm-utilizing buildings. HB 665 has been assigned to the House Technology and Innovation Committee, while SB 79 remains in the Senate Financial Institutions, Insurance, and Technology Committee. This matters because Ohio would join California and New York in a fast-moving national trend to regulate algorithmic pricing, and companies in AI voice services, PropTech, or any platform that sets prices dynamically should monitor this legislative wave closely.
Oklahoma House Committee Passes AI Governance and Minor Protection Bills. The Oklahoma House Government Modernization and Technology Committee unanimously passed two AI bills authored by Committee Chairman Representative Cody Maynard (R-Durant). House Bill 3545 would establish responsible standards for state agency use of AI, requiring human oversight for high-risk decisions, transparency whenever AI-generated content is used, and annual reporting on AI tools deployed in state government. House Bill 3546 would explicitly clarify that AI systems and other non-human inanimate objects will not be granted personhood in Oklahoma. A third bill, HB 3544, referred to the Civil Judiciary Committee, would protect minors from AI systems designed to simulate human-like relationships by prohibiting the deployment of social AI companions and human-like AI chatbots to minors. Both passed bills must clear the Commerce and Economic Development Oversight Committee before reaching the full House. This matters because Oklahoma joins a growing number of states filling the regulatory vacuum on AI governance while Congress stalls, and the minor protection bill responds directly to widely reported lawsuits alleging that AI-companion platforms foster emotional dependency in minors.
Tesla Autopilot Verdict Upheld at $243 Million. A Florida federal court upheld a $243 million jury verdict against Tesla over a fatal 2019 crash involving its Autopilot system, rejecting the automaker's motion for a new trial or reduced damages. The jury found Tesla 33% liable for a crash that killed a pedestrian, awarding $42.57 million in compensatory damages and $200 million in punitive damages after finding that Autopilot was defective because Tesla allowed drivers to engage it on roads for which it was not designed and failed to monitor driver attention adequately. This case signals growing judicial willingness to hold technology companies accountable when their AI marketing outpaces actual capabilities. The plaintiffs successfully argued that Tesla "set the stage" for the crash by overhyping Autopilot's features despite knowing its vulnerabilities—a theory with direct application to any business deploying AI-powered tools. Companies should take note: the "user error" defense is increasingly insufficient when systems are marketed with capabilities they cannot safely deliver. Punitive damages exposure is real when companies are aware of limitations but fail to implement adequate safeguards or appropriately constrain system use.
Privacy
Federal privacy engagement this month focuses on restrictions on national security data and telecommunications cybersecurity. State privacy developments this month include enforcement reports, genetic data legislation, comprehensive privacy bills, a Supreme Court cert grant, and two landmark rulings. Here are the key takeaways:
- Federal agencies focus on national security and cybersecurity. The FTC sent warning letters to 13 data brokers regarding military personnel data. PADFAA uses a different data-broker definition than state privacy laws, covers data touching military status, and prohibits transfers to foreign adversary countries. The FCC's ransomware guidance establishes agency expectations for carrier cybersecurity preparedness that may inform enforcement proceedings following preventable incidents.
- Courts examine key liability issues. The courts are examining liability issues related to privacy disclosures and the application of BIPA's 2024 amendment to pending cases.
- Regulators disclose enforcement priorities. Connecticut's CTDPA enforcement report identified three specific priority areas: breach notification delays, cookie banners that fail to provide meaningful opt-out mechanisms, and failure to honor universal opt-out signals.
Federal
FCC Issues Ransomware Defense Notice. The Federal Communications Commission's Public Safety and Homeland Security Bureau issued Public Notice DA 26-96 on January 29, 2026, highlighting best practices that communications providers can implement to defend against ransomware attacks. The guidance comes in response to a significant increase in ransomware incidents affecting the communications sector, including a four-fold increase in attacks since 2021, and recent incidents involving small-to-medium-sized communications providers that disrupted service, exposed sensitive information, and locked providers out of critical files. The notice addresses best practices for preventing ransomware attacks, including network segmentation, backup protocols, incident response planning, and employee training, as well as guidance on responding to an attack and existing reporting obligations. This matters because, while the Public Notice does not impose new regulatory requirements, it signals regulatory expectations for telecommunications cybersecurity practices. It may inform enforcement decisions in cases involving preventable ransomware incidents. Carriers should document their ransomware preparedness measures and ensure alignment with the FCC's recommended practices.
FTC Sends Warning Letters to Data Brokers on PADFAA Compliance. The Federal Trade Commission sent warning letters to thirteen data brokers on February 9, 2026, cautioning them of requirements under the Protecting Americans' Data from Foreign Adversaries Act and urging a comprehensive review of their data practices. PADFAA prohibits data brokers from selling, licensing, transferring, or otherwise providing access to personally identifiable sensitive data of U.S. individuals to foreign adversary countries, including China, Iran, North Korea, and Russia. The template letter released by the FTC indicates that military personnel data is the immediate enforcement focus, stating that the agency has identified instances in which companies offer solutions that involve an individual's status as a member of the Armed Forces. Potential penalties range up to $53,088 per violation. This matters because PADFAA is a national security overlay on the standard privacy compliance stack with a data broker definition that differs from state privacy laws, and companies in AI, AdTech, or any platform that aggregates and resells consumer data touching military status, minors, or geolocation face active and growing enforcement risk.
States
Connecticut Releases 2025 CTDPA Enforcement Report. Connecticut Attorney General William Tong released the state's 2025 enforcement report on the Connecticut Data Privacy Act, detailing enforcement priorities and compliance concerns identified during the first full year of enforcement. By the end of 2025, the Office had issued dozens of notices of violations and warning letters, finalized multiple data breach settlements, including a $105,000 settlement with Omni Healthcare for waiting more than fourteen months to report a breach, and resolved its first enforcement action under the CTDPA. The report identifies key focus areas, including breach notification timing with several companies delaying notifications beyond the sixty-day statutory window, problematic cookie banner implementations that failed to provide meaningful opt-out mechanisms, and inadequate support for universal opt-out signals, with the AG emphasizing that businesses must honor Connecticut residents' opt-out requests regardless of the mechanism used to submit them. This matters because Connecticut's enforcement report provides valuable insight into the AG's interpretation of CTDPA requirements and enforcement priorities, and companies operating in Connecticut should review their breach notification procedures, cookie consent mechanisms, and universal opt-out signal recognition against the standards articulated in the report.
Connecticut Genetic Data Privacy Legislation. Connecticut legislators introduced HB 5128, a genetic privacy bill, in response to the 23andMe data breach, which exposed the genetic information of millions of consumers. Attorney General Tong submitted testimony in support of the legislation, noting that the company had gathered the DNA of over 15 million people before a threat actor stole records from over six million customers, and that bad actors used that genetic data to target people of Jewish and Chinese descent, threaten disclosure of genetic ancestry information of celebrities and world leaders, and create applications to block visitors to websites based on race and ethnic origin. The proposed bill would grant Connecticut residents exclusive control over their biological material, their DNA, and the results of any analysis of their DNA. It would require companies that collect DNA to obtain express consent for any use not previously communicated, as well as before any sale or transfer, and to implement reasonable security measures to protect consumers' biological samples and genetic data from unauthorized access or disclosure. This matters because the 23andMe breach highlighted the unique sensitivity of genetic data and the inadequacy of general privacy frameworks to address genetic privacy concerns. Companies collecting or processing genetic information should anticipate heightened regulatory requirements and prepare for state-specific genetic privacy legislation as Connecticut and other states respond to consumer concerns about genetic data security.
New Jersey Amends Comprehensive Privacy Law. New Jersey's comprehensive privacy law was amended at the end of January 2026, with amendments taking effect immediately upon Governor Murphy's signature on his final day in office. The amendments expand the HIPAA-related exemption to cover information treated as protected health information, broadening the carve-out to cover health-adjacent data handled under HIPAA-equivalent standards, even if not technically PHI. The definition of de-identified data has also been modified to include situations under HIPAA where data recipients are contractually prohibited from re-identifying the data. This matters because enacted state privacy laws are not static. States are iterating on their frameworks in real time, meaning compliance programs must be treated as living documents rather than one-time implementations. The HIPAA-adjacent carve-out expansion is particularly relevant for health tech, telehealth, or AI health platform clients handling data that straddles PHI and general consumer health data.
Maine House Passes Strong Privacy Bill. A comprehensive privacy bill, LD 1822, passed the Maine House of Representatives on February 10, 2026. The bill closely mirrors the privacy law Maryland enacted in 2024. It would extend essential privacy protections to Mainers, including strong data minimization requirements, enhanced protections for sensitive data, and civil rights protections prohibiting data-driven discrimination. The Maine Online Data Privacy Act would provide meaningful limits on the collection of personal data and the use of sensitive data, prohibit the sale of sensitive data, including precise geolocation data, and include enhanced protections for minors' personal data, including prohibitions on its sale and use for targeted advertising. The ACLU of Maine noted that the bill would prevent DHS and ICE from using cell phone location data to track people without a warrant. This matters because LD 1822's data minimization approach and its prohibition on the sale of precise geolocation data align with California AG Bonta's surveillance pricing investigations, and states advancing consumer-protective privacy frameworks rather than industry-written templates are creating meaningful compliance obligations that will shape the national privacy landscape.
VPPA Supreme Court Cert Petition and CIPA Appellate Uncertainty. On January 26, 2026, the U.S. Supreme Court granted certiorari in Salazar v. Paramount Global to resolve a circuit split regarding the scope of the Video Privacy Protection Act (VPPA). The Court will consider whether the phrase "goods or services from a videotape service provider" refers to all of a provider's goods or services or only to its audiovisual goods or services. The case stems from allegations that the plaintiff subscribed to a newsletter from a sports entertainment website owned by Paramount, watched videos on the site, and that the Meta Pixel caused his browser to transmit his viewing habits to Facebook without consent. The Sixth Circuit held that Salazar was not a "consumer" because he subscribed only to a digital newsletter rather than to audiovisual content, splitting with the Second and Seventh Circuits. Meanwhile, appellate courts are reaching inconsistent conclusions regarding the application of the California Invasion of Privacy Act to website tracking technologies, creating uncertainty about the viability of claims based on analytics tools, session replay software, and advertising pixels. This matters because website tracking litigation under the VPPA and CIPA has generated significant class action activity, with inconsistent rulings creating litigation risk for companies operating websites with video content or third-party analytics tools. Companies should monitor appellate developments and evaluate their tracking technology implementations against evolving legal standards, with a Supreme Court decision anticipated in the first several months of 2027.
California SB 923: CCPA Data Deletion Expansion. California introduced SB 923, the Expanding Privacy Rights Act, sponsored by the California Privacy Protection Agency (CalPrivacy), which would expand the California Consumer Privacy Act's deletion rights to require businesses to delete all personal information held about a consumer, regardless of how the business obtained it. Under the proposed legislation, California consumers would gain the right to request deletion of personal information obtained from third-party sources, such as data brokers, in addition to information they provide themselves, closing a critical gap in privacy protections by addressing the widespread practice of businesses supplementing consumer records with data purchased from external sources. If enacted, this expansion would significantly increase compliance burdens for businesses that share consumer data with advertising partners, data brokers, and service providers. Companies would need to implement data lineage tracking and establish mechanisms for honoring deletion requests across their entire data ecosystem, a substantial operational undertaking for organizations with complex data-sharing arrangements.
California Whistleblower Protection and Privacy Act. California Assemblymember Pilar Schiavo introduced AB 2021, the Whistleblower Protection and Privacy Act, sponsored by the California Privacy Protection Agency (CalPrivacy). The legislation proposes amending the California Consumer Privacy Act (CCPA) to establish the first whistleblower complaint-and-award program for privacy law violations. The bill would allow individuals to submit whistleblower complaints to CalPrivacy regarding companies' privacy practices, with eligible whistleblowers receiving between fifteen (15) and thirty-three (33) percent of fines collected through administrative enforcement actions or settlements resulting from their complaints. To qualify for an award, whistleblowers must provide original information based on independent knowledge or analysis, must be represented by an attorney, and must declare under penalty of perjury that the information submitted is true and correct. The legislation includes comprehensive anti-retaliation protections, creating a new standalone cause of action allowing employees, contractors, or agents to sue for retaliation related to CalPrivacy whistleblowing, with remedies including reinstatement, double back pay plus interest, compensatory damages, and attorneys' fees. If passed, this would be the first privacy-specific whistleblower regime in the United States, modeled on successful programs like the SEC whistleblower program, and would create significant new compliance risks by financially incentivizing employees, contractors, and others with insider knowledge to report potential violations. Businesses should evaluate whether their internal privacy compliance programs include mechanisms for employees to raise concerns internally before seeking external whistleblower awards.
Texas AG Investigates Conduent Data Breach Affecting Four Million Texans. Texas Attorney General Ken Paxton announced an investigation into business services provider Conduent Business Services and Blue Cross Blue Shield of Texas following a data breach that exposed the sensitive personal data of approximately four million Texans, which Paxton characterized as likely the largest breach in U.S. history. The breach, which occurred between October 21, 2024, and January 13, 2025, resulted from an unauthorized third party accessing Conduent's systems and obtaining files containing the protected health information of Texas residents, including Texas Medicaid recipients. Compromised data may include names, Social Security numbers, medical information, and health insurance details, and the Attorney General issued Civil Investigative Demands to both companies seeking documents and evidence regarding BCBSTX's compliance with state laws protecting confidential information, as well as Conduent's security measures, communications, and compliance with Texas law. This matters because the investigation underscores the cascading liability exposure when a single vendor breach affects multiple covered entities and millions of individuals. Healthcare organizations using third-party service providers for functions requiring access to protected health information should evaluate whether their vendor management programs include adequate security requirements, audit rights, breach notification obligations, and indemnification protections.
Federal Judge Upholds $425 Million Google Privacy Verdict. U.S. District Judge Richard Seeborg denied Google's motion in Rodriguez v. Google to decertify the class and vacate a $425.7 million jury verdict finding Google liable for collecting mobile users' data analytics after they attempted to block data collection. The Order also rejected plaintiffs' request for an additional $2.36 billion in disgorgement of profits and a permanent injunction. The class action alleges that Google's "Web & App Activity" setting, which users toggled off believing it would stop data collection, failed to prevent Google from gathering app-related data through its Analytics for Firebase code embedded in approximately ninety-seven (97) percent of top Android apps and fifty-four (54) percent of leading iOS apps. The jury found Google liable for invasion of privacy and intrusion upon seclusion after determining that the language in Google's privacy policy was not obvious to the average user, with one juror noting that users are generally skimmers, not readers. This matters because the ruling represents one of the largest privacy verdicts against a technology company and establishes that companies can face class-wide liability for collecting user data through mechanisms that operate separately from user-facing privacy controls, even when such collection is disclosed somewhere in privacy policies. Businesses should also take note of the jury's finding that average users skim rather than read privacy policies when designing and communicating data collection practices.
Seventh Circuit Reviews Whether the 2024 BIPA Amendment Is Retroactive. The Seventh Circuit heard oral arguments on whether Illinois' August 2024 amendment to the Illinois Biometric Information Privacy Act (BIPA), which limits damages to one recovery per person when biometrics are repeatedly collected using the same method, applies retroactively to pending lawsuits, with one judge observing that billions of dollars in consequences turn on how the change is labeled. The amendment responded to the Illinois Supreme Court's decision in Cothron v. White Castle System, Inc., 2023 IL 128004 (2023), which held that damages accrue for every biometric scan. The amendment, however, is silent on retroactivity, and federal and Illinois state courts remain split, with the growing consensus in federal courts favoring prospective-only application. If the Seventh Circuit agrees, any BIPA violation before August 2, 2024, would be analyzed under the original statute, potentially exposing companies to enormous per-scan damages through August 2029, given the five-year limitations period. Companies with pre-amendment exposure are advised to assume the amendment does not apply for settlement and litigation strategy purposes.
TCPA Class Action Filed Against Omaha Steaks Over Unsolicited Marketing Texts. A new TCPA class action, Nelson v. Omaha Steaks, alleges that the company sent unsolicited marketing text messages from short code 51803 promoting buy-one-get-one-free offers. Plaintiff Justin Nelson of Michigan claims he received multiple promotional text messages intended for someone else and that his number was registered on the National Do Not Call Registry at the time. This case follows a familiar wrong-number TCPA pattern in which a company's marketing texts reach someone other than the intended recipient, potentially exposing the sender to statutory damages of $500 to $1,500 per message. Companies using SMS marketing should ensure robust consent verification processes, maintain accurate phone number databases, implement procedures to honor opt-out requests and scrub against the DNC registry, and establish protocols for handling wrong-number complaints before they escalate to litigation.
Marketing and Consumer Protection
Federal and state consumer protection enforcement this month reflects that regulators and courts are holding platforms, marketplaces, and product marketers accountable for the gap between what they represent to consumers and what they deliver. The FTC warned Apple that undisclosed algorithmic decisions are a consumer protection matter, quantified an illegal cancellation flow in the Uber complaint, and backed the SCAM Act's advertiser verification mandate. California is seeking injunctive relief against Amazon for vendor pricing coercion, Georgia is applying child safety scrutiny to Roblox, and litigation against Tesla, Walmart, Costco, and Hims and Hers is advancing theories that require substantiation before a claim runs, not after a lawsuit forces a correction. Here are the key takeaways:
- Undisclosed platform practices. Regulators are taking action when a company's conduct diverges from its stated terms or its advertising claims are unsubstantiated.
- User agreement disclaimers cannot cure misleading product names or label claims. Tesla, Costco, and Hims and Hers all face theories requiring affirmative substantiation before the claim runs; a buried disclaimer does not fix a false net impression.
- Company Claims Challenged in Class Actions. Consumers continue to pursue litigation to hold companies accountable for their advertising claims.
Federal
Bipartisan SCAM Act Targets Fraudulent Social Media Advertising. Senators Ruben Gallego (D-AZ) and Bernie Moreno (R-OH) introduced the Safeguarding Consumers from Advertising Misconduct Act, bipartisan legislation that would require social media platforms and other online services to take reasonable steps to prevent fraudulent and deceptive advertisements or face enforcement action by the Federal Trade Commission and state attorneys general. The bill follows a Reuters investigation reporting that Meta internally estimated approximately 10 percent of its overall revenue, roughly $16 billion, could be tied to ads for scams and other prohibited products, yet allegedly allowed flagged advertisers to continue running ads rather than removing them. Under the proposed legislation, platforms would be required to verify advertisers using government-issued identification or documentation proving legal business existence before allowing ads to appear, promptly investigate user and government reports of suspected fraud, and provide users with better tools to report fraudulent activity. This legislation represents the most significant proposed federal mandate requiring affirmative platform responsibility for the integrity of advertising content, moving beyond voluntary self-regulation.
FTC Chairman Issues Warning Letter to Apple Over Alleged Apple News Ideological Bias. Federal Trade Commission Chairman Andrew N. Ferguson issued a warning letter to Apple CEO Tim Cook following reports that Apple News systematically boosts left-wing news sources while suppressing right-wing sources. The letter reminds Apple of its obligations to customers and warns that such practices could violate the FTC Act under three theories: if content curation is inconsistent with Apple's terms of service; if failure to disclose ideological favoritism constitutes a material omission contrary to consumers' reasonable expectations; or if such practices cause substantial injury that is neither reasonably avoidable nor outweighed by countervailing benefits. This letter signals a potentially significant shift in FTC enforcement philosophy toward treating algorithmic content-curation decisions that involve undisclosed ideological preferences as potential consumer protection violations. For technology platforms that operate news aggregation, social media feeds, or search functions, the letter suggests the Commission may scrutinize whether content ranking and suppression practices are adequately disclosed to users and consistent with the stated terms of service.
FTC and 21 State Attorneys General File Amended Complaint Against Uber for Subscription Practices. The Federal Trade Commission and a bipartisan coalition of 21 state attorneys general filed an amended complaint against Uber Technologies Inc. and Uber USA LLC, challenging the company's Uber One subscription service practices. The amended complaint alleges that Uber made deceptive savings claims to induce subscriptions, enrolled consumers without proper consent during free trial periods, charged customers before subscription periods had ended, and implemented unnecessarily difficult cancellation processes requiring navigation through at least twelve (12) actions and seven (7) screens. In addition to the District of Columbia, participating states include Alabama, Arizona, California, Connecticut, Illinois, Maryland, Michigan, Minnesota, Missouri, Montana, Nebraska, New Hampshire, New Jersey, New York, North Carolina, Ohio, Oklahoma, Pennsylvania, Virginia, West Virginia, and Wisconsin. The multistate coalition seeks consumer restitution, civil penalties for violations of the Restore Online Shoppers' Confidence Act and state consumer protection laws, and injunctive relief to halt the allegedly deceptive practices. This action signals coordinated federal and state enforcement focus on subscription service dark patterns and negative option marketing, and companies offering subscription products should review their enrollment flows, savings representations, free trial disclosures, and cancellation mechanisms for compliance with both FTC and state consumer protection standards.
State
California AG Seeks Injunction Against Amazon for Alleged Price-Fixing Scheme. California Attorney General Rob Bonta filed a motion for preliminary injunction against Amazon, citing newly uncovered evidence that the e-commerce giant pressured vendors to raise prices on competing retailers' websites or remove products entirely, using its "overwhelming bargaining leverage" and threatening "dire consequences" for noncompliance. The motion, filed in San Francisco County Superior Court as part of a 2022 lawsuit alleging violations of the Cartwright Act and California Unfair Competition Law, asserts that vendors are "coerced into acting as intermediaries" between Amazon and its competitors to fix retail prices. This case highlights how dominant platforms can leverage their market position to manipulate pricing across an entire ecosystem. For businesses that sell through Amazon or other major platforms, the allegations underscore the legal risks of acquiescing to platform demands that may constitute anticompetitive conduct, even when framed as standard vendor compliance. Companies should carefully document any platform communications regarding competitor pricing and consult antitrust counsel before agreeing to pricing restrictions that extend beyond the platform itself. The case also signals that state attorneys general are increasingly willing to pursue aggressive discovery and injunctive relief in platform competition cases.
Georgia AG Opens Roblox Child Safety Investigation. Georgia Attorney General Chris Carr has issued Civil Investigative Demands to Roblox Corporation as part of an investigation into child safety practices on the platform. The investigation focuses on Roblox's handling of abuse reports, content moderation policies, and age verification mechanisms, seeking documents and information on how Roblox responds to reports of inappropriate contact with minors, the effectiveness of its automated content moderation systems, and measures to prevent adult users from contacting children on the platform. The Georgia investigation signals that state attorneys general are expanding child safety enforcement beyond traditional social media platforms to include gaming and virtual world platforms, and companies operating online services used by minors should evaluate their content moderation, abuse reporting, and age verification practices against the heightened scrutiny these platforms now face from state regulators.
Maine Bureau of Financial Institutions Issues NSF Fee Guidance. The Maine Bureau of Financial Institutions issued Bulletin 83 declaring it an unfair practice for state-chartered institutions to charge multiple non-sufficient funds fees when the institution provides inaccurate disclosures about re-presentment and fee practices, or when, even with accurate disclosures, multiple fees are assessed in a short period without adequate notice or opportunity for customers to bring their accounts current. The guidance reflects growing regulatory concern over fee stacking practices that can significantly compound consumer harm from a single insufficient funds event.
Massachusetts AG Reaches $4.65 Million Settlement with Mortgage Servicer. Massachusetts Attorney General Andrea Joy Campbell announced a $4.65 million settlement with Newrez LLC, as successor by merger to Specialized Loan Servicing LLC, resolving allegations of widespread unfair and deceptive servicing practices. The settlement addresses claims that the servicer sent cure notices with shorter-than-required cure periods of 33 days instead of the legally mandated 90 days, failed to notify consumers of loan modification rights, failed to process modification requests lawfully, ignored COVID-19 relief obligations, omitted required debt validation notices, and exceeded Massachusetts debt collection communication frequency limits of two calls per week. The settlement includes significant restitution to hundreds of consumers who experienced foreclosure while subject to the alleged unlawful practices.
New Hampshire AG Settles with PayPal for $1.75 Million Over Deceptive Practices. New Hampshire Attorney General John Formella announced a $1.75 million settlement with PayPal Inc. and PayPal Holdings Inc., resolving allegations of unfair and deceptive practices on its PayPal and Venmo platforms. The investigation found that PayPal deceptively advertised 24/7 access to funds while actually freezing consumer accounts, misrepresented a purchase protection product with difficult-to-access benefits, and failed to adequately disclose its privacy practices for sensitive financial data. The settlement requires PayPal to make significant changes to both platforms and reform its marketing and operations, addressing practices that particularly affected low-income residents who lack access to traditional banking and rely on these platforms for essentials like rent, groceries, childcare payments, and government assistance funds.
NYC Department of Consumer and Worker Protection Sues Solar Panel Installer. The New York City Department of Consumer and Worker Protection filed a lawsuit against Radiant Solar and its owner, William James Bushell, alleging the company defrauded New Yorkers seeking affordable renewable energy for their homes. The complaint alleges that Radiant Solar misrepresented energy savings through false immediate savings claims, ignored post-installation complaints, failed to secure promised solar tax incentives, imposed undisclosed dealer fees totaling approximately $3 million, and steered consumers into large loans without their knowledge or consent in violation of NYC law prohibiting home improvement contractors from acting as agents for lenders. DCWP is seeking at least $1,752,225 in civil penalties, approximately $18 million in restitution, and a company shutdown, along with personal liability for the owner. This represents New York City's first legal action against an allegedly fraudulent solar panel installation company and the most restitution DCWP has ever sought in a home improvement contractor case.
New York AG Secures $2.4 Million Debt Relief in Lease-to-Own Settlement. New York Attorney General Letitia James announced a settlement with Monterey Financial Services, providing approximately $2.4 million in debt relief for 835 New York consumers who were misled by lease-to-own agreements for consumer goods, including furniture, pets, wedding dresses, and car repairs that consumers mistakenly believed were traditional purchase financing. The investigation found that Monterey charged illegal convenience fees, threatened consumers with a fictitious legal department, and threatened to repossess consumers' pets over payments the consumers did not understand were lease obligations. In one cited example, a consumer who believed they were purchasing a $2,000 puppy ended up paying $3,592.95 after all fees and monthly payments. The settlement requires Monterey to pay $175,000 in penalties, cancel all outstanding New York leases, cease collecting on any lease-originated debt, and request that consumer reporting agencies remove any negative credit impact from these leases.
Texas AG Targets Retailer Over Health-Risk Disclosures. Texas Attorney General Ken Paxton filed suit against a retailer of chest binders, compression garments used to flatten chest tissue, alleging that the company failed to adequately disclose the products' health risks, particularly when sold to minors. The complaint alleges violations of Texas consumer protection law arising from the company's failure to warn consumers of potential risks, including breathing difficulties, rib fractures, and skin irritation. The enforcement action is part of a broader pattern of Texas AG investigations targeting products marketed to transgender youth, and signals continued Texas AG focus on companies selling products to minors where health risks may not be adequately disclosed.
Attorney General Coalition Submits Comment Letter Opposing CFPB ECOA Disparate Impact Rule. A coalition of 21 state attorneys general (AGs) led by California Attorney General Rob Bonta submitted a joint comment letter opposing the Consumer Financial Protection Bureau's proposed rule that would eliminate the disparate impact test for credit discrimination under the Equal Credit Opportunity Act. The coalition argued that removing the disparate impact standard contradicts ECOA's statutory purpose and would effectively enable credit discrimination by requiring proof of intentional discrimination rather than discriminatory effects. The attorneys general contend that the proposed amendments violate the Administrative Procedure Act because they are contrary to the text and purpose of ECOA, and that proper reading of the statute compels the conclusion that it authorizes disparate impact liability consistent with Supreme Court precedent under the Fair Housing Act. This comment letter reflects substantial state-level opposition to the proposed regulatory change, and creditors should continue maintaining disparate impact testing and fair lending compliance programs, given the likelihood of continued state enforcement regardless of federal regulatory changes.
Financial Regulator Settlement for SAFE Act Violations. A multistate coalition of twenty (20) state financial regulators settled enforcement actions against a mortgage loan originator (MLO) who falsely claimed credit for continuing education courses he never completed in violation of the SAFE Act. Penalties varied by state, with Colorado and Florida imposing $7,000 each, most other states imposing $1,000, and Maryland and New Mexico, where licenses were pending, imposing no monetary penalty. The MLO is permanently barred from obtaining mortgage originator licensure in most participating states. This matters because the coordinated enforcement action demonstrates state regulators' information-sharing capabilities and willingness to pursue multistate discipline for licensing fraud, and mortgage companies should ensure their MLO compliance programs include verification of continuing education completion rather than relying solely on employee self-reporting.
Litigation and NAD
Credit One Bank Pays $10.2 Million to Settle Debt Collection Harassment Claims. Credit One Bank agreed to pay $10.2 million, comprising $9 million in civil penalties and $1.2 million in investigative costs, to settle a lawsuit brought by the California Debt Collection Task Force, a statewide team of the district attorneys' offices in Santa Clara, San Diego, Los Angeles, and Riverside counties. The judgment, entered on February 19, 2026, in Riverside County Superior Court, ordered the bank and its agents to implement policies and procedures to prevent unreasonable and harassing debt collection calls to California consumers. The coordinated multi-county enforcement action demonstrates the growing sophistication of state-level consumer protection efforts against financial institutions and signals that California prosecutors will pursue significant remedies for debt collection practices that violate consumer protection standards, even when conducted by federally chartered banks.
Estée Lauder Sues Walmart Over Counterfeit Beauty Products. Estée Lauder filed a federal lawsuit against Walmart in the Central District of California, alleging that counterfeit versions of its prestige brands, including Estée Lauder, La Mer, Clinique, Le Labo, Aveda, and Tom Ford, were sold on Walmart.com. The complaint asserts trademark infringement, false designation of origin, trade dress infringement, unfair competition, and vicarious trademark liability, targeting Walmart directly rather than third-party sellers, arguing that the platform was sufficiently involved to be treated as the seller. Estée Lauder points to Walmart's control over checkout, payment, customer service, and returns, its use of trademarks in SEO tools, and its public representations that it selects and partners with marketplace sellers. Walmart is expected to invoke Section 230 of the Communications Decency Act, which it used previously in a class action that settled before the defense was resolved. This matters because the theory that platform involvement, SEO use of trademarks, and revenue sharing creates liability beyond mere hosting is essentially a roadmap for how brands might pierce Section 230 in the future, particularly if Congress amends or sunsets the statute.
Costco Faces Class Action Over No Preservatives Rotisserie Chicken Claims. Two California consumers filed a class action in the Southern District of California on January 22, 2026, alleging Costco systematically cheated customers by falsely advertising that its Kirkland Signature Seasoned Rotisserie Chicken had no preservatives. The lawsuit alleges that the chicken contains both sodium phosphate and carrageenan, additives used as preservatives, and that the "no preservatives" claims appeared prominently on in-store signage and the website, while any mention of these ingredients appeared only in small print on the back of the label. Claims include violations of Washington's Consumer Protection Act and California's Consumers' Legal Remedies Act, Unfair Competition Law, and False Advertising Law. Costco responded by removing all references to preservatives from its signage and online descriptions without admitting wrongdoing. This matters because it is a textbook clean-label false-advertising case riding the broader consumer trend toward preservative-free foods, and Costco pulling its signage almost immediately after the suit was filed makes it difficult to defend the claim as truthful while simultaneously removing it.
Hims and Hers Faces Class Action Over Compounded GLP-1 Claims. A class action lawsuit has been filed against Hims and Hers Health Inc., alleging false advertising in connection with the company's marketing of compounded GLP-1 medications for weight loss. The complaint alleges that Hims and Hers marketed its compounded semaglutide products as equivalent to FDA-approved medications like Ozempic and Wegovy, when the compounded versions have not undergone the same regulatory review for safety and efficacy. The lawsuit alleges that the company's advertising implied that compounded GLP-1 products would produce the same weight loss results as branded medications, without adequately disclosing the differences between compounded and FDA-approved products. The proliferation of telehealth platforms offering compounded versions of popular medications raises significant advertising and consumer protection concerns, and companies marketing compounded pharmaceuticals should carefully evaluate their advertising claims and ensure clear disclosures distinguishing compounded products from FDA-approved medications.
Tesla Challenges California DMV False Advertiser Designation. Tesla has filed suit against the California Department of Motor Vehicles, challenging the agency's determination that the company engaged in false advertising by naming and marketing its Autopilot and Full Self-Driving features. The DMV found that Tesla marketing materials created the impression that vehicles equipped with these features could operate autonomously without driver supervision, when in fact the systems require constant driver attention and control. Tesla argues that its marketing materials and user agreements clearly disclose the limitations of its driver assistance features, and that the DMV's interpretation exceeds the agency's regulatory authority. The litigation will test the extent to which state agencies can regulate advertising claims for emerging automotive technologies, and companies marketing advanced driver assistance features should carefully evaluate whether product naming and marketing materials create misleading impressions about system capabilities, regardless of disclaimers in user agreements.
Rawlings Faces Class Action Over Upgraded Bat Certification. A class action has been filed against Rawlings Sporting Goods, alleging that the company marketed baseball bats as having been upgraded to meet new USA Baseball certification standards when the bats were relabeled versions of previously certified products. The complaint alleges that Rawlings charged premium prices for upgraded bats that were materially identical to older inventory, exploiting consumer confusion about certification requirements to clear obsolete stock at inflated prices. The case illustrates an emerging theory of certification arbitrage fraud, where companies allegedly exploit transitions between product standards to mislead consumers about the nature of upgraded or recertified products.
NAD Rules on PrettyBoy Skincare Third-Party Ratings and Clinical Evidence Claims. The National Advertising Division issued a decision regarding PrettyBoy skincare advertising claims, addressing the substantiation requirements for both third-party rating references and clinical efficacy claims. NAD found that the company's use of third-party review ratings required additional context to avoid creating misleading impressions about the basis for the ratings, and that certain clinical claims required more robust substantiation than the company had provided. The decision guides how companies should present third-party endorsements and ratings in advertising, emphasizing that the context in which ratings are presented matters as much as the ratings themselves, and that companies using third-party ratings or clinical evidence in advertising should ensure that claims are presented with appropriate context and that clinical evidence meets the standards required for the specific claims being made.
Of Note | AI in Advertising
What the Super Bowl Did Not Tell You About AI Ads But Regulators Will
Every year, I watch the Super Bowl, but not for the game. The game is fine, and the halftime show is great. For me, the commercials are where the real action is, and this year, artificial intelligence (AI) stole the show in ways that are simultaneously impressive, instructive, and, from a legal standpoint, genuinely alarming.
Super Bowl LX delivered the most AI-saturated advertising spectacle in television history. Approximately twenty-three (23) percent of the commercials, or fifteen (15) of the sixty-six (66) spots, featured AI in some form. Brands either promoted AI products or used AI tools to make the ads themselves. Yet, in nearly every case, the brands left any AI disclosure off the screen.
AI transparency requirements for advertising are still emerging. Regulators have brought enforcement actions against companies engaged in AI-washing, or overstating AI capabilities, and states are passing disclosure requirements in real time. Some of this is in response to consumer reactions to AI being used to create movie characters, models in print ads, and even Super Bowl commercials.
We saw examples of both categories this year. Ads for AI products (e.g., Anthropic, OpenAI, Google, Meta, Genspark, and others) demonstrated the technology in real time, and most consumers watching those spots knew AI was being showcased. Yet, in commercials where AI was used as a production tool to generate visuals, script content, depict younger versions of celebrities, or build entire spots, there was no on-screen notation indicating that AI created or significantly authored the work. I was not able to identify a single Super Bowl LX commercial that included a formal on-screen label stating the creative was AI-generated. The audience was not thrilled that AI was the headline and, according to AdWeek, felt that the message was hollow and not distinctive. Here are a few examples:
- Svedka: The First (Mostly) AI-Generated Super Bowl Ad. Vodka brand Svedka made history by running what it characterized as the first primarily AI-generated national Super Bowl spot. The thirty-second ad featured the brand's Fembot character, reconstructed using AI after four months of training models to simulate facial expressions and movement. The creative was openly discussed in press interviews. Inside the actual ad? No disclosure. The brand talked to the Wall Street Journal about its AI process, but it did not tell the 125 million people watching the game. That asymmetry, transparency with the trade press but opacity with consumers, is precisely the pattern that regulators are beginning to scrutinize.
- Good Will Dunkin. The best AI-adjacent ad of the game, by most expert measures, was Dunkin's Good Will Dunkin' spot, a 90s sitcom-style parody featuring Ben Affleck, Jennifer Aniston, and Jason Alexander. The ad used AI technology to depict the celebrities as they appeared in the 90s. This was an effective use of AI that generated great Super Bowl engagement without feeling like a technological demonstration.
- Amazon Alexa. Amazon's spot starred Chris Hemsworth in a satirical "AI is out to get me" storyline, introducing the new Alexa+ and showcasing its enhanced capabilities. Charming enough. The ad effectively conveyed a highly capable AI assistant that can manage your home, plan your vacation, and respond to natural-language queries in real time. But the ad never disclosed that an AI voice powers Alexa+, that the visuals used (e.g., the fight with a bear) were AI-generated, or that Alexa's responses were AI-generated. For a product that sits in people's homes and listens, that is a meaningful omission.
- Ferris Bueller. The Genspark spot starring Matthew Broderick channeling Ferris Bueller was cleverly cast. Broderick delivers joyful rebellion against pointless authority, with the message that viewers can let AI handle their work on Monday so they can take the day off. The ad reportedly used a script generated by Genspark itself to demonstrate the product's capabilities. While it was a great concept, legally it is an object lesson in implied claims. The ad's net impression to a reasonable consumer is that AI can handle your professional workload autonomously and reliably, producing work that is ready to send to your boss or, better yet, your clients. That claim is materially incomplete.
AI-Generated Advertisements in 2025
J.Crew: A Cautionary Tale
These trends are not limited to television; they are also playing out in print and digital ads. In August 2025, J.Crew posted an Instagram campaign promoting its collaboration with Vans. The images looked like photographs of men in preppy Americana settings, boats, bikes, studios, muted colors, the kind of aesthetic the brand built over decades. On closer inspection, the images were riddled with AI artifacts, such as a foot bending backward, stripes that dissolved into static, and hands merging with handlebars.
When the style blog Blackbird Spyplane broke the story, J.Crew initially said nothing. When pressed, the brand released a statement that did not acknowledge AI, saying it is always experimenting with new forms of creative content. Only after public backlash did J.Crew add a caption credit to digital art by @samfinn.studio, an account belonging to an artist who describes himself as an AI photographer. However, the brand never explicitly stated that it used AI-generated models or imagery. Beyond the question of transparency with customers, these ads raise a legal question: do such depictions constitute false and deceptive advertising? Courts have not yet settled the answer. But with New York's synthetic performer law taking effect on June 9, 2026, the direction of travel is clear. Businesses need to include disclosures when using AI in this way.
Vogue ran a U.S. Guess campaign in its August 2025 issue that included an AI-generated model. This ad took a different approach and included a disclosure. Each image reportedly carried a notation that the visuals were produced with artificial intelligence. Readers could, in theory, know what they were looking at. In practice, critics argued the disclosure was in fine print; it was present but not prominent. Once again, there was backlash. Readers canceled subscriptions. The creative industry pushed back hard on what it characterized as AI displacement of human models, photographers, and stylists. The reputational damage was real, even though the legal exposure was reduced.
The Vogue/Guess situation illustrates an important principle that disclosure done lightly is still better than no disclosure, but it is not the same as disclosure done right. Clear and conspicuous is a legal standard, not a design suggestion, and consumers let the industry know that fine print does not meet that standard when using AI in this way.
Coca-Cola has run an AI-generated holiday campaign for the last two consecutive years, both times disclosing the AI production in press materials, brand statements, and, in the case of the 2025 holiday spot, prominently in the first frame of the video. The company's global head of generative AI and its chief marketing officer spoke openly about the tools, the process, and the intentions behind the work. Coca-Cola did not hide the AI; it leaned into it. The creative itself drew significant backlash, with critics calling both versions soulless, cheap, and a betrayal of the brand's emotional heritage. Social media sentiment dropped sharply after the 2025 advertisement was released. The company's own testing told a different story, with the ad scoring off the charts with general consumers and ranking among Coca-Cola's top-tested ads in history. The haters, as one Coca-Cola executive put it, were loudest online but not representative of the mass audience.
Whether you love or loathe the creative, Coca-Cola modeled a disclosure posture that other brands should study. It disclosed proactively, consistently, and in detail. It did not use AI secretly. The lesson in all of this is not that AI-generated advertising is universally good or bad. The lesson is that transparency builds a legally and reputationally defensible position.
Product Claims: AI Washing
While some brands have been busy using AI without disclosing it, others have done the opposite, claiming their AI products had capabilities they did not have. The FTC calls this "AI washing," and it has become one of the agency's most active and aggressive bipartisan enforcement priorities.
The FTC has actively pursued AI washing since launching "Operation AI Comply" in September 2024, when it took enforcement action against five companies in a single announcement. By August 2025, Air AI, which marketed a conversational AI tool it claimed could replace human sales representatives entirely, had become the agency's twelfth AI-washing target since 2024 and the fourth in 2025 alone. Certain AI companies promise passive income, autonomous business operation, or AI capabilities that far exceed what their products actually deliver. Air AI told businesses its product could autonomously handle customer service and sales calls with no human oversight, generating substantial profits. The FTC alleged that the product could do none of that reliably and estimated up to $250,000 in damages per affected business. DoNotPay similarly paid $193,000 to settle FTC charges over its marketing of a "robot lawyer" that was not remotely capable of what it promised. Workado was ordered to stop advertising the accuracy of its AI content-detection tool. IntelliVision settled over misleading claims about its AI facial recognition software. The pattern is clear: if you claim your AI can do something it cannot reliably do, the FTC is coming.
The Genspark ad is a nuanced example of AI washing. AI productivity tools are powerful, but they hallucinate. They make errors. They require human review. When evaluating ads, the FTC evaluates the overall net impression of an advertisement, and not just whether individual sentences are technically accurate. An ad that shows AI taking over your work, with no caveat about accuracy or the need for human oversight, creates a net impression that the product cannot reliably support. There is no specific federal rule today requiring a hallucination disclaimer in a TV commercial. But an ad that implies AI can do your work without error, when that implication is both material and false, is exactly the kind of claim the FTC has been signaling it will pursue. A single on-screen line, "AI suggestions may be inaccurate; human review recommended," would have been both honest and protective. Instead, Ferris got the day off, and the legal exposure came with him.
When AI Calls
If you thought AI disclosure obligations were complex in advertising, wait until you see the telephone context. The rules are evolving rapidly, enforcement is active, and the penalties are per-violation, meaning that one noncompliant AI calling campaign can generate liability at scale.
In February 2024, the FCC issued a unanimous Declaratory Ruling confirming that AI-generated voices fall within the TCPA's prohibition on "artificial or prerecorded voice" calls. This is settled law. If your company uses any AI voice technology to initiate an outbound telephone call, you are operating under the TCPA's full consent and disclosure requirements, no different from a traditional robocall.
What that means in practice, under current law, for every AI voice call your company initiates:
- Callers must identify themselves at the beginning of the call.
- Callers must provide their telephone number (or the seller's) during the call.
- If the call is advertising or telemarketing, an opt-out mechanism must be offered within two seconds of those required disclosures.
- Prior express written consent from the called party is required before the call is placed.
In July 2024, the FCC issued a Notice of Proposed Rulemaking proposing even stricter requirements specifically targeting AI voice calls. While not yet final, these proposals signal the regulatory direction with clarity:
- At the beginning of each AI voice call, the caller must clearly disclose that the call uses AI-generated voice technology.
- Consent forms must explicitly state that calls or texts will use AI-generated content, and general robocall consent is not sufficient.
- Consent for AI calls must be obtained separately from consent for human-made calls.
- AI disclosures in consent language must be in plain language and prominently placed, and not buried in terms of service.
Smart practice today is to update consent language to explicitly reference AI voice, even before the final rule is issued. The directional signal is unambiguous. Forward-looking compliance is far less expensive than remediation.
Voice Cloning: A Separate and Higher-Risk Category
Using AI to clone a real person's voice (a celebrity spokesperson, an executive, or a public figure) is a different and substantially higher-risk matter. It implicates right-of-publicity laws in addition to the TCPA, and in multiple states, the unauthorized cloning of a recognizable voice is now explicitly illegal. This is not merely a disclosure issue. It is a prior consent and rights clearance issue that must be resolved in contracts with talent before production begins. Consider the January 2024 robocall that used AI to impersonate President Biden's voice in an effort to suppress voter turnout in the New Hampshire primary; it resulted in a proposed $6 million FCC fine and criminal referrals to state and federal law enforcement. Voice cloning without consent is not a regulatory gray area. It is fraud.
State Laws Are Accelerating
Texas SB 140 requires callers using AI-generated voice technology to disclose that fact within the first 30 seconds of the call and to obtain enhanced, specific consent that explicitly references AI; it also prohibits AI voice cloning of real individuals without consent, with a private right of action and statutory damages for each violation. California's AB 2905, effective January 1, 2025, requires callers using an automatic dialing device to inform recipients when the prerecorded message uses a voice generated or significantly altered by AI, with penalties up to $500 per violation. Michigan's AI Political Disclaimer Law (HB 5141) adds disclosure requirements for AI-generated political phone communications. Additional states have pending or recently enacted legislation adding requirements. The patchwork is growing, and multi-state compliance planning is no longer optional for any company running AI-assisted calling programs.
Practical Takeaways for Businesses
With this area of law changing rapidly, compliance can feel like a moving target. But there are concrete steps businesses can take now to get ahead of the curve.
1. Update Your Brand Guidelines
Your brand guide governs how your company presents itself visually and verbally. It should now include an AI policy covering (i) when AI may be used in creative production; (ii) what disclosure language is required; and (iii) how to document AI tool usage throughout the creative workflow. The goal is not to prohibit AI, but to ensure its use is tracked, disclosed appropriately, and legally defensible.
2. Establish Disclosure Standards by Channel
- Television and streaming. Use on-screen notation for AI-generated or significantly altered ad content, and a verbal disclosure if an AI voice is used.
- Print and digital. Label AI-generated models, images, and scenes clearly and conspicuously, not in fine print.
- Social media. Caption credits, such as "AI-generated imagery," should appear at the beginning of the caption, not buried at the end.
- Website chatbots. Ensure the first message identifies that AI is being used, adopt a persistent UI label, and link to your privacy notice.
- AI-generated testimonials or endorsements. Disclose the AI nature clearly and ensure the underlying claim reflects actual consumer experience.
3. Train Marketing, Business Development, and Product Teams
Legal compliance in AI advertising requires more than policies. It requires that the people generating content, approving campaigns, and deploying tools understand the rules. Training should cover (i) the FTC's net impression standard and what it means for implied claims; (ii) state-specific requirements for the markets you operate in; (iii) the difference between AI-generated content and an ad about AI; (iv) the specific prohibited conduct under New York's synthetic performer law; (v) TCPA consent and disclosure requirements for any AI-assisted calling programs; and (vi) the risk of AI washing, that is, overstating what your AI-powered product can do. This training is not a one-time event. The law is moving. Build a cadence of quarterly updates for relevant teams.
4. Watch for Implied Claims in Productivity and Capability Advertising
If your company sells or markets an AI-powered product that portrays an AI capability, evaluate whether the product's actual performance substantiates the net impression created by that portrayal. The claim that AI does your work for you is a capability claim. If the product hallucinates, requires human oversight, or fails in foreseeable ways on the tasks shown, consider adding a limitation disclosure. The FTC is watching, and more critically, so are your customers.
5. Do Not Neglect Right-of-Publicity and Copyright Clearances
Before your AI image generation tool produces a model for your next campaign, ensure you have conducted a review for unintended resemblance to real individuals, especially public figures. Before your AI voice tool synthesizes a spokesperson, confirm you have the rights. These are not hypothetical risks. Tennessee's ELVIS Act, California's likeness laws, and New York's right-of-publicity statutes all explicitly protect individuals from unauthorized AI replication of their voice or likeness. Litigation is active. Your vendor contracts should specifically address who owns the AI-generated outputs and what liability the vendor assumes for infringement arising from training data.
Final Thought
The Super Bowl this year was a snapshot of where AI and advertising currently stand. AI can write copy. AI can generate images. AI can produce a 30-second commercial, a print campaign, a social media rollout, and a media buy recommendation. What AI cannot do is provide legal advice, assess your specific exposure under applicable laws like New York's synthetic performer statute, evaluate whether your FCC consent forms cover your new AI voice agent, or tell you whether your Genspark-style productivity claims cross the FTC's net-impression threshold.
Every AI-assisted advertising campaign, whether print, digital, broadcast, or social, should be reviewed by a human lawyer before it runs. Not after the backlash. Not after the regulatory inquiry. Before. The law in this space is unfolding quickly. A proactive legal review is not a cost center. It is risk management, and the direction for companies is clear: disclose early, disclose conspicuously, and have a human lawyer review the work before the cameras roll.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.