ARTICLE
9 October 2025

California Passes Broad Safety And Transparency Law For 'Frontier' AI Developers

Herbert Smith Freehills Kramer LLP

Gov. Gavin Newsom signed California's long-awaited Transparency in Frontier Artificial Intelligence Act (TFAIA) on September 29, establishing safety and transparency requirements for developers of large AI models. The TFAIA is the California Legislature's second attempt at a sweeping AI safety law, after Newsom vetoed a more expansive bill, SB 1047, last year amid backlash from Silicon Valley and other tech companies. The TFAIA builds on a report outlining AI safety guardrails from a working group of leading experts that Newsom convened following that veto.

The TFAIA was narrowed from SB 1047 to target developers (rather than deployers) of "frontier" AI foundation models, defined as those trained using a quantity of computing power greater than 10²⁶ integer or floating-point operations (FLOPs). This threshold includes computing power used to fine-tune the models or make other post-training adjustments. By contrast, the EU AI Act applies to models trained using more than 10²⁵ FLOPs.

The TFAIA also distinguishes between frontier developers whose models meet the computing power thresholds and "large" frontier developers, defined as those whose gross annual revenues also exceed $500 million. All frontier developers, regardless of revenue, must comply with certain transparency and whistleblower-protection requirements, while large frontier developers are bound by additional safety and risk analysis requirements.
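
To put these two statutory thresholds in perspective, below is a minimal sketch in Python of how a developer might estimate where it falls. The 6 × parameters × tokens FLOP estimate is a common scaling heuristic for dense transformer training, not anything drawn from the statute, which counts the actual quantity of integer or floating-point operations (including fine-tuning and other post-training compute); the model sizes, revenue figure, and function names here are hypothetical.

```python
# Minimal sketch of the TFAIA's two thresholds (hypothetical figures).
# The statute counts actual integer or floating-point operations,
# including fine-tuning and other post-training compute; the
# 6 * params * tokens estimate below is only a common heuristic
# for dense transformer training runs.

FRONTIER_FLOP_THRESHOLD = 1e26   # TFAIA "frontier model" compute trigger
LARGE_DEVELOPER_REVENUE = 500e6  # gross annual revenue threshold, USD

def estimate_training_flops(params: float, tokens: float) -> float:
    """Rough training cost: ~6 FLOPs per parameter per token."""
    return 6 * params * tokens

def classify_developer(total_flops: float, gross_annual_revenue: float) -> str:
    """Map a developer onto the TFAIA's two tiers."""
    if total_flops <= FRONTIER_FLOP_THRESHOLD:
        return "not a frontier developer"
    if gross_annual_revenue > LARGE_DEVELOPER_REVENUE:
        return "large frontier developer (framework, transparency, and reporting duties)"
    return "frontier developer (transparency and whistleblower duties)"

# Hypothetical: a 2-trillion-parameter model pretrained on 10 trillion tokens,
# plus post-training compute, by a developer with $600M gross annual revenue.
pretraining = estimate_training_flops(params=2e12, tokens=10e12)  # ~1.2e26 FLOPs
post_training = 5e24  # fine-tuning compute also counts toward the threshold
print(classify_developer(pretraining + post_training, gross_annual_revenue=600e6))
```

For comparison, the EU AI Act's 10²⁵ FLOP trigger would capture training runs roughly an order of magnitude smaller than the TFAIA's threshold.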

Although Colorado, Texas, and Utah have also passed broad AI laws, and a handful of other states (including California) already regulate specific AI uses, the TFAIA establishes first-of-its-kind safety requirements, including:

  • Creating Frontier AI Frameworks. A large frontier developer must draft, implement, and publish on its public website an overarching "AI framework" directed at safety measures and update that framework annually or whenever material changes are made. The AI framework must address:
    • Industry Standards. How the developer incorporates national standards, international standards, and industry-consensus best practices into its frontier AI framework. Although the TFAIA does not identify such standards, the National Institute of Standards and Technology published its AI Risk Management Framework in 2023 and the International Organization for Standardization has published similar international guidance (such as ISO/IEC 42001 on AI management systems).
    • Catastrophic Risk. How the developer identifies the potential for catastrophic risks caused by its model, how it mitigates those risks, and any related third-party assessments. The TFAIA defines "catastrophic risk" as "a foreseeable and material risk" that the model will "materially contribute to the death of, or serious injury to, more than 50 people or more than one billion dollars" in damages arising from the model's contribution to biological or nuclear weapons, criminal conduct, fraud, cyberattacks, or evading the control of the model's developer or user.
    • Cybersecurity and Critical Safety Incidents. How the developer will implement cybersecurity controls and respond to critical safety incidents, which are defined as any unauthorized access to the model weights, or other loss of control of the frontier model, that results in death or bodily injury or a material increase of catastrophic risk.
    • Governance Practices. How the developer applies internal governance practices to prevent critical safety incidents, mitigate catastrophic risks, and ensure the frontier model does not circumvent human oversight mechanisms.
  • Transparency Reports. All frontier developers must publish a "transparency report" on their public website, before or concurrently with deploying a new frontier model, that describes the release date of the model, the intended uses of the model, the languages it supports, its output modalities, and a method by which a person may contact the developer. Large frontier developers must also publish in that report a summary of the AI framework described above, including a summary of risk assessments, the third parties involved in such assessments, and how the developer responded.
  • Emergency Government Reports. A large frontier developer must transmit "a summary of any assessment of catastrophic risk . . . resulting from internal use of its frontier models" and report any "critical safety incident" to the California Office of Emergency Services within 15 days of discovery. If the critical safety incident presents an imminent risk of death or serious injury, the developer must notify public safety authorities within 24 hours.
  • Whistleblower Protections. All frontier developers are prohibited from discouraging or retaliating against employees who report violations of the TFAIA. Large frontier developers must also maintain an internal process through which an employee may anonymously disclose information to the developer regarding activities that present "a specific and substantial danger to the public health or safety" resulting from a catastrophic risk, or any violation of the TFAIA.

The new law also calls for the creation of a consortium within the Government Operations Agency to "develop a framework for the creation of a public cloud computing cluster to be known as 'CalCompute' that advances the development and deployment of artificial intelligence that is safe, ethical, equitable, and sustainable" by fostering research and innovation that benefits the public.

While Newsom stressed that the TFAIA strikes the right balance between fostering innovation and protecting communities, it remains to be seen whether the TFAIA will set the standard for AI regulation in the US. California is home to 32 of the world's top 50 AI companies, and major players including Anthropic, OpenAI, and Meta publicly backed the bill or applauded its passage. A similar bill has passed the New York State Legislature but has yet to be signed. Meanwhile, Republicans at the federal level continue to seek preemption of state AI regulation after an attempt to impose a 10-year moratorium on state AI laws failed in Congress earlier this year.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
