ARTICLE
30 July 2025

California's AI Law Has Set Rules For Generative AI—Are You Ready?

Seyfarth Shaw LLP

Starting January 1, 2026, California's AI Transparency Act (SB 942) goes into effect, marking the first law in the U.S. to require built-in disclosures and detection tools for generative AI content. Do not panic (yet). This law does not apply to every system out there. In fact, many companies may be surprised to find they're completely outside the AI Transparency Act's reach. That said, January is coming up faster than you think. If you think this law might apply to you, now's the time to start sorting that out.

Here's the bottom line: SB 942 only applies if your company builds a generative AI system that creates audio, video, or image content (in other words, multimedia), makes that system publicly accessible in California, and has over one million monthly users or visitors. Miss any one of those criteria, and the law does not apply to you. It also does not matter where most of your users are located; what matters is that the system is accessible from California. So even if your user base is largely outside the state, if Californians can reach your system and you're over the user threshold, you're potentially on the hook. There's no carve-out for accidental exposure or minimal in-state usage. If it's online and accessible to Californians, the law could apply.

The law is focused on multimedia content because that's where the biggest risks of deception lie. If you provide AI tools that generate synthetic voices, deepfake-style videos, hyper-realistic artwork, or virtual avatars, you're likely in the crosshairs, especially if you created, coded, or produced the system. Even if you're licensing the platform to others, the law may still apply. On the other hand, if you're simply offering access to a generative AI system developed by someone else, you may not be covered. But if you modify, fine-tune, or rebrand that third-party system and make it publicly accessible in California, you could still fall within scope. Responsibility often depends on how much control or customization you have over the AI system and whether you're effectively acting as a provider.

If your system only generates text, however, you're not covered under SB 942, regardless of how many users you have. This includes chatbots, email-writing assistants, legal brief generators, article summarizers, language translators, recommendation engines, search tools, and even streaming platforms or games that use pre-scripted content. If your AI talks but doesn't draw, animate, or sing, you're in the clear, at least for now.

For companies that are covered, the law comes with a specific set of requirements. You'll need to offer a free, publicly available tool that allows users to check whether an image, video, or audio clip was generated by your system. Your content must support two forms of disclosure: a visible label that users can opt to include, and a hidden watermark that your system must embed automatically. That watermark needs to include your company's name, details about the AI system used, a timestamp of when the content was created, and a unique identifier.
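The statute lists what the latent disclosure must convey, but not how to format or embed it. As a purely illustrative sketch (the function, field names, and example values below are our own, not drawn from the law or any standard), the required metadata might be assembled like this before being passed to whatever watermarking scheme a provider adopts:

```python
import json
import uuid
from datetime import datetime, timezone

def build_provenance_manifest(provider_name: str,
                              system_name: str,
                              system_version: str) -> dict:
    """Assemble the categories of information SB 942 requires in the
    hidden (latent) disclosure. Field names are illustrative only."""
    return {
        # the covered provider's name
        "provider": provider_name,
        # which generative AI system (and version) produced the content
        "system": {"name": system_name, "version": system_version},
        # timestamp of when the content was created or altered
        "created": datetime.now(timezone.utc).isoformat(),
        # a unique identifier for the piece of content
        "content_id": str(uuid.uuid4()),
    }

# Example: serialize the manifest for embedding as a latent disclosure
manifest = build_provenance_manifest("ExampleAI Inc.", "PixelForge", "2.1")
payload = json.dumps(manifest)
```

How that payload is actually embedded (steganographic watermark, C2PA-style manifest, or otherwise) is left to the provider; the point is simply that all four categories of information travel with the content.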

There's also a licensing catch. If you license your generative AI system to others, you must include terms in the contract that require them to preserve the watermarking features. If they remove or tamper with those features, you are legally required to revoke the license within 96 hours.

Unlike some privacy laws out there, the AI Transparency Act has some teeth. Violations can result in civil penalties of up to $5,000 per day, per violation. Enforcement power rests with the California Attorney General and local prosecutors. Individuals cannot bring lawsuits under the law because there's no private right of action. However, if the state sees noncompliance, it has the power to step in.

So what's the takeaway? If your AI generates images, audio, or video and has a big user base in California, it's time to think about compliance. But if your system sticks strictly to text, you're off the hook, at least under this law. That said, the regulatory winds are shifting fast. SB 942 might be the first, but it won't be the last. Expect other states, and perhaps eventually federal lawmakers, to follow suit with their own rules.

If you're unsure whether your system falls under SB 942, or if you want to get ahead of what's coming, now's the time to ask the hard questions. Better to prepare early than scramble later.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
