ARTICLE
6 January 2026

New York Updates AI Disclosure Law

Roth Jackson

Contributor

Roth Jackson and Marashlian & Donahue's strategic alliance delivers premier regulatory, litigation, and transactional counsel in telecommunications, privacy, and AI, guiding global technology innovators with forward-thinking strategies that anticipate risk, support growth, and navigate complex government investigations and litigation challenges.

On December 11, 2025, New York Governor Kathy Hochul signed into law landmark legislation requiring that advertisers disclose when their ads use AI-generated "synthetic performers."

The law (Senate Bill S.8420-A / Assembly A.8887-B) amends New York's General Business Law to mandate a clear, conspicuous disclosure whenever a commercial advertisement contains a "synthetic performer" — defined as a digitally created, AI-generated person who is not a real identifiable actor.
The disclosure requirement applies to most commercial advertising (products, services, digital/social ads, etc.), though the law excludes certain "expressive works" (e.g., film and TV promos) when the AI use aligns with the content's nature. The law also expands post-mortem rights of publicity: commercial use of a deceased individual's likeness, voice, or image now requires prior consent from the individual's heirs or executors.

Penalties for noncompliance are steep: a civil fine of $1,000 for a first violation and $5,000 for each subsequent violation.
The bottom line is that if a brand or ad agency uses an AI-generated "avatar" or digital persona to sell something, and that ad reaches New York consumers (or otherwise falls under New York law), the ad must now be labeled as AI-generated, or the advertiser risks fines.

What are the implications for the advertising industry as a whole?

Transparency Obligations. The law becomes effective 180 days after signing, so as of July 2026, brands that use synthetic performers should audit their creative workflows and ad inventories to ensure compliance. For national advertisers, especially those running ads that reach New York, this means a global compliance check: even if an ad isn't specifically targeted to New York, if there's any chance the work will be seen there, it's safest to assume the disclosure should be included. Alternatively, avoid AI avatars entirely.

Adaptive Creative Strategy. Because of the disclosure requirement, the "mystique" of a photorealistic AI model may backfire. Consumers may read "This ad uses AI-generated people" as a trust signal, or as a warning. On the other hand, using real human talent becomes more appealing and may convey authenticity and build trust in the brand. Budgets should be adjusted to account for in-person shoots, hybrid campaigns, or some combination of the two. The law also aligns with labor and rights-of-publicity interests (backed by SAG-AFTRA).

Workflow Checks. Here comes legal as the official buzzkill of the ad world. Ad agencies and brand legal/compliance teams must now build disclosure checks into the creative workflow. Any time a campaign uses AI-generated people, there must be a compliance flag, disclosure copy, and a final review before distribution.

For cross-state or national campaigns, agencies will need to track where ads run (the targeting/delivery footprint) to know whether New York law applies, or simply apply the disclosure universally to avoid risk. Given the penalties, even a single noncompliant ad could cost more than compliance, so many advertisers will likely opt for universal disclosures rather than rely on selective blocking or geo-fencing.
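Purely as an illustration of how such a workflow check might be wired into ad operations, here is a minimal Python sketch. The field names (uses_synthetic_performer, delivery_regions, disclosure_copy), the region codes, and the sample disclosure wording are hypothetical assumptions, not anything prescribed by the statute; actual disclosure language and placement would need legal review.

```python
# Minimal sketch (not legal advice): flag ads that may need a
# synthetic-performer disclosure. All field names and region codes
# below are hypothetical, not any ad platform's real API.
from dataclasses import dataclass

SAMPLE_DISCLOSURE = "This ad contains an AI-generated synthetic performer."

@dataclass
class Ad:
    campaign: str
    uses_synthetic_performer: bool   # tagged by the creative team during review
    delivery_regions: set[str]       # e.g., {"US-NY", "US-CA"} or {"US-ALL"}
    disclosure_copy: str | None = None

def needs_disclosure(ad: Ad, apply_universally: bool = True) -> bool:
    """Return True if the ad should carry the disclosure.

    apply_universally=True reflects the approach described above:
    label every synthetic-performer ad rather than rely on
    geo-fencing to keep it out of New York.
    """
    if not ad.uses_synthetic_performer:
        return False
    if apply_universally:
        return True
    return bool(ad.delivery_regions & {"US-NY", "US-ALL"})

def compliance_flags(ads: list[Ad]) -> list[str]:
    """List campaigns that need the disclosure but are missing disclosure copy."""
    return [
        ad.campaign
        for ad in ads
        if needs_disclosure(ad) and not ad.disclosure_copy
    ]

if __name__ == "__main__":
    inventory = [
        Ad("Spring launch", True, {"US-ALL"}),                          # flagged
        Ad("NY transit takeover", True, {"US-NY"}, SAMPLE_DISCLOSURE),  # compliant
        Ad("Podcast spots", False, {"US-NY"}),                          # no AI performer
    ]
    print(compliance_flags(inventory))  # ['Spring launch']
```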

What are the strategic and legal risks for advertisers?

Reputation. If a major brand tries to hide the use of AI avatars and gets called out (or fined), the negative publicity could overshadow any cost savings from using synthetic performers.

Rights of Publicity Exposure. The companion law expanding post-mortem rights means unauthorized digital replicas of deceased people could open advertisers to claims from estates — a legal minefield.

Swimming Upstream. While the Executive Branch is pushing for a uniform national AI framework that might preempt state laws, enforcement of any such framework is uncertain and legal challenges are likely. The choice, then, is to comply now, use only human talent, or risk a complicated and costly patchwork of state laws.

What can you do now to prepare?

  • Audit all ongoing and planned ad campaigns and tag any that use AI-generated faces or synthetic performers (a minimal sketch of this audit step follows this list).
  • Update compliance workflows so that the use of AI-generated people is reviewed like any other element of the creative that must meet specific requirements.
  • Draft disclosure copy. It can be short, but it must be clear, and it should be used universally.
  • Reexamine your creative strategy to weigh the costs and benefits of synthetic performers versus human talent.
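As referenced in the first item above, here is a hedged sketch of what that inventory audit could look like in practice. The CSV column names (campaign, synthetic_performer, disclosure_copy) are hypothetical placeholders for whatever an ad-ops export actually contains, and whether a disclosure is "clear and conspicuous" remains a legal judgment no script can verify.

```python
# Hedged sketch of the audit step: tag campaigns in a flat export
# (e.g., a CSV pulled from an ad server) that use synthetic performers
# and note whether disclosure copy is already attached. Column names
# are assumptions; adapt them to your own inventory export.
import csv

def audit_campaigns(path: str) -> list[dict]:
    """Return one audit row per campaign with a simple action tag."""
    report = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            uses_ai = row.get("synthetic_performer", "").strip().lower() == "yes"
            has_copy = bool(row.get("disclosure_copy", "").strip())
            if not uses_ai:
                action = "no action"
            elif has_copy:
                action = "verify disclosure is clear and conspicuous"
            else:
                action = "add disclosure copy before distribution"
            report.append({"campaign": row.get("campaign", ""), "action": action})
    return report

# Example usage: print(audit_campaigns("ad_inventory.csv"))
```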

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
