24 November 2025

Client Alert: California Continues To Lead On AI With New Legislation And Enforcement Steps

Jenner & Block


Introduction

California Attorney General Rob Bonta recently announced his office's plans to hire an AI expert and "investigative technologists" to assist with an expected increased focus on enforcing the raft of recent AI laws passed in California. This announcement will not come as a surprise to anyone who has been following the California Legislature's efforts to pass various AI laws and empower the Attorney General to enforce them. Indeed, the close of California's 2025 legislative session has positioned the state as a leader for AI governance. With the enactment of California S.B. 53—the nation's first law regulating frontier AI model developers—California has moved beyond addressing discrete AI issues to pioneering a comprehensive framework for transparency, safety, and public infrastructure. Combined with other AI legislation enacted in California this year and last, California's leadership on AI—alongside what we anticipate will be enforcement efforts from Attorney General Bonta and his office—will likely shape jurisdictions nationwide.

AI companies should take note: compliance dates for enacted AI legislation in California range from January 2026 through 2028, demanding near-term compliance planning. Meanwhile, the California Kids AI Safety Act ballot initiative, which is expected to qualify for the November 2026 ballot, packages together more aggressive regulatory requirements that have failed to pass the legislature or to receive the Governor's signature. If passed, that initiative would redefine the term "companion chatbot," prohibit companion chatbots for children in specified cases, ban the sale of children's data, and provide for a private right of action. As other states will likely look to follow California's lead—both in passing legislation and staffing Attorneys General offices with individuals with expertise to enforce that legislation—companies across the country should consider whether their products and practices fall within the scope of current and anticipated regulations.

I. California's Past Leadership on AI

California enacted multiple important bills in 2024, including:

  • A.B. 2013, which requires AI developers to publicly disclose information on model training data and goes into effect on January 1, 2026.
  • S.B. 942 (as modified by A.B. 853 (2025)), which establishes a "content provenance" framework to improve the traceability of AI-generated material. The measure directs developers to apply "latent" labels on image, video, or audio generated by AI models, and will gradually take effect beginning in 2026.
  • A.B. 2602 and A.B. 1836, or "digital replica" laws, which target the unauthorized use of an individual's digital likeness and deepfakes. These laws expand civil remedies for individuals whose voices, images, or personal attributes are replicated without consent.

Also in 2024, Governor Gavin Newsom vetoed S.B. 1047 and instituted the California AI Policy Working Group in its place. The high-profile bill would have imposed safety and testing obligations on developers of large-scale AI models. Governor Newsom's veto message emphasized the importance of "get[ting] this right" and instructed the working group to study AI governance frameworks and recommend best practices. In carrying out its mandate, the working group released a report in March 2025 that became the basis of S.B. 53, California's landmark frontier model regulation bill.

II. California's Landmark 2025 Legislative Session

In 2025, California built on its earlier success by passing numerous additional AI bills in the following categories:

Frontier Model Regulations: The centerpiece of this session is S.B. 53 ("Transparency in Frontier Artificial Intelligence Act"). Building on recommendations from the California AI Policy Working Group's first-in-the-nation report, S.B. 53 makes California the first state to directly regulate frontier AI developers. Unlike risk-based approaches that affect downstream deployment, S.B. 53 centers on the development stage: the legislation requires publication of detailed safety frameworks and transparency reports; establishment of a CalCompute public computing cluster; mandatory reporting of critical safety incidents; and robust whistleblower protections. Most requirements take effect January 1, 2026, with non-compliance carrying penalties of up to $1 million per violation.

Content Provenance and Other Transparency Measures: Content provenance (watermarking) bills typically require the inclusion of digital watermarks or content provenance information to enable users to identify when audio or visual content is AI-generated and distinguish it from "authentic" content. A.B. 853 amends S.B. 942 and imposes requirements on "large online platforms" and "capture device manufacturers" in addition to AI systems that generate photos, video, or audio content. S.B. 683 addresses publicity rights by imposing civil liability for knowingly using a person's digital likeness for commercial purposes without consent. A.B. 621 strengthens civil liability for platforms that aid and abet deepfake pornography.

Chatbots and Consumer Safety: California also passed legislation focused on vulnerable populations. S.B. 243—signed in place of the vetoed A.B. 1064—requires operators to make certain disclosures and issue reports on companion chatbots, as well as implement protocols for preventing self-harm. The bill's scope narrowed significantly during the legislative process to focus specifically on companion chatbots. A.B. 489 prevents AI chatbots from misrepresenting themselves as licensed medical professionals.

During the 2025 legislative session, several notable bills were hotly contested in the legislature and ultimately vetoed. S.B. 7 would have imposed notice and disclosure requirements where AI systems are used for employment-related decisions and prohibited AI use without human review, but Governor Newsom's veto message cited "overly broad restrictions" and "unfocused notification requirements." S.B. 11 would have required consumer warnings for products capable of creating unauthorized digital replicas, but Governor Newsom questioned whether warnings would "be sufficient to dissuade wrongdoers." A.B. 1064 would have prohibited certain kinds of companion chatbots for minors, but Governor Newsom noted it is "imperative that adolescents learn how to safely interact with AI systems."

These laws indicate that California is broadly seeking to sculpt an ecosystem of safe, transparent, and innovative AI development.

III. Setting the Stage for 2026

California's AI regulatory momentum shows no signs of slowing. The state is poised for an even more consequential year ahead, with ballot initiatives and new legislation primed to reshape the AI compliance landscape nationwide.

The most significant development is the California Kids AI Safety Act ballot initiative, submitted to the Attorney General's Office on October 22. The vetoed A.B. 1064 provides much of the basis for the Kids AI Safety Act—specifically its prohibition on making chatbots with dangerous capabilities (i.e., self-harm, erotic content, and illegal content) available to kids. The Kids AI Safety Act includes more ambitious provisions as well, such as independent safety audits (which the legislature had stripped out of S.B. 53), prohibiting internet-enabled devices in schools, and barring the sale of children's data.

The initiative is expected to qualify for the November 2026 ballot and has strong political backing and public support. While the initiative could be withdrawn before the ballot if the California legislature passes compromise legislation in 2026, a compromise may be challenging, given the proponents' ambitious scope and Governor Newsom's prior veto.

During California's 2026 legislative session, lawmakers are expected to propose amendments to A.B. 853 to address technical feasibility and privacy concerns with content provenance requirements. Bills targeting employment AI systems, biometric data, and automated decision-making in high-stakes contexts are also anticipated.

In addition, many of the bills that were held or vetoed are likely to return in some form in 2026. These include S.B. 7 (regulating employers' use of AI), A.B. 1018 (monitoring automated decision systems), and A.B. 412 (requiring a database of works used to train models), among others.

State AI regulation is expected to accelerate in the 2026 sessions, particularly in majority-Democratic states where potential deregulatory efforts may only galvanize additional action. Colorado's AI Act, which takes full effect in June 2026, has been a model for other states seeking to enact comprehensive AI regulations, although Colorado's law remains controversial and the state has convened another working group to consider amending it. Multiple states are considering chatbot safety legislation for their 2026 sessions. In sum, the regulatory landscape remains fluid as jurisdictions balance innovation incentives against consumer protection and interstate consistency—with California continuing to serve as both testing ground and trendsetter for AI governance nationwide.

