On September 29, 2025, California enacted Senate Bill 53, the Transparency in Frontier Artificial Intelligence Act (TFAIA), a sweeping legislative framework aimed at enhancing transparency, safety, and accountability in the development and deployment of advanced artificial intelligence (AI) models. The law introduces new obligations for AI developers, establishes incident reporting mechanisms, creates research initiatives, and provides robust whistleblower protections. The TFAIA is designed to address the unique risks posed by highly capable AI systems, particularly those with the potential to cause catastrophic harm. The TFAIA defines a catastrophic risk as a foreseeable and material risk that could result in the death of, or serious injury to, more than 50 people, or more than $1 billion in property damage, arising from high-risk uses such as expert-level assistance in the creation or release of chemical, biological, radiological, or nuclear weapons; autonomous cyberattacks or criminal conduct; or the loss of control over advanced AI models.
The passage of the TFAIA marks the culmination of a multi-year legislative effort to regulate AI in California. The TFAIA is the successor to SB 1047, an earlier bill that Governor Newsom vetoed in 2024, citing concerns that it was overly restrictive and might hinder AI innovation in California. SB 1047 would have required all AI developers, particularly those working on models with training costs of $100 million or more, to assess specific risks. After the veto, Governor Newsom tasked AI researchers with developing an alternative approach, which resulted in a comprehensive 52-page report.1 The recommendations from that report formed the basis of the TFAIA.
Scope and applicability
The TFAIA applies to "frontier developers," meaning entities that train or initiate the training of "frontier models," defined as foundation AI models trained using more than 10^26 computational operations. The law imposes additional requirements on "large frontier developers," defined as frontier developers with annual gross revenues exceeding $500 million. By focusing on the most advanced and well-resourced AI developers, the TFAIA aims to address the greatest potential risks posed by frontier models while minimizing regulatory burdens on smaller or less advanced companies.
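To make the two statutory thresholds concrete, the following minimal sketch in Python shows how the tiers nest. Only the numeric thresholds (10^26 computational operations and $500 million in annual gross revenue) come from the statute; the function and variable names are hypothetical, and a real applicability analysis would turn on the statutory definitions rather than two numbers.

```python
# Illustrative sketch only: thresholds mirror the TFAIA's definitions,
# but all names here are hypothetical, not drawn from the statute.

FRONTIER_COMPUTE_THRESHOLD = 1e26                 # training compute, in operations
LARGE_DEVELOPER_REVENUE_THRESHOLD = 500_000_000   # annual gross revenue, USD

def classify_developer(training_ops: float, annual_revenue_usd: float) -> str:
    """Return the TFAIA tier, if any, that a developer falls into."""
    if training_ops <= FRONTIER_COMPUTE_THRESHOLD:
        return "not a frontier developer"
    if annual_revenue_usd > LARGE_DEVELOPER_REVENUE_THRESHOLD:
        return "large frontier developer"   # full framework and reporting duties
    return "frontier developer"             # baseline transparency duties

# Example: ~2 x 10^26 training operations, $1 billion in annual revenue
print(classify_developer(2e26, 1_000_000_000))  # -> large frontier developer
```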
Key requirements
The TFAIA establishes a framework for large frontier AI developers, focusing on transparency, risk management, and public accountability. The following key obligations are designed to ensure the safe and responsible development and deployment of advanced AI systems.
Frontier AI frameworks
Large frontier developers are required to draft, implement, and publicly post a "frontier AI framework" detailing their approach to managing catastrophic risks associated with their AI models. The framework must address:
- the incorporation of national and international standards and industry best practices;
- thresholds and assessment methodologies for identifying catastrophic risks;
- mitigation strategies and review processes prior to deployment or extensive internal use;
- use of third-party evaluators for risk assessment;
- cybersecurity measures to protect unreleased model weights; and
- internal governance and incident response protocols.
Frameworks must be reviewed and updated at least annually. Material modifications must be published within 30 days, together with justifications that clearly describe the nature of the changes, the reasons for them, and how they address the management, assessment, or mitigation of catastrophic risks.
Transparency and reporting
Prior to or concurrent with the deployment of a new or substantially modified frontier model, developers must publish a transparency report disclosing:
- the developer's website and a method of communicating with the developer;
- the model's release date;
- the languages and output modalities the model supports; and
- the model's intended uses and any general usage restrictions.
Large frontier developers must also disclose:
- summaries of catastrophic risk assessments and mitigation steps; and
- the involvement of third-party evaluators.
Large frontier developers must also submit quarterly (or otherwise scheduled) summaries of internal catastrophic risk assessments to the Office of Emergency Services (OES).
Incident reporting
The OES is tasked with establishing mechanisms for both public and developer reporting of "critical safety incidents," which include unauthorized access to model weights resulting in harm, materialization of catastrophic risks, loss of model control, or deceptive model behavior that increases catastrophic risk. Frontier developers must report such incidents within 15 days of discovery, or within 24 hours if there is an imminent risk of death or serious injury. Developers may redact information in published reports as necessary to protect trade secrets, cybersecurity, public safety, or national security, and, to the extent the underlying concern permits, must describe the nature of and justification for any redactions in the published version.
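As a simple illustration of the two reporting windows, the following sketch computes a notional reporting deadline from a discovery date. Only the 15-day and 24-hour windows come from the law; the function and parameter names are hypothetical.

```python
from datetime import datetime, timedelta

def reporting_deadline(discovered_at: datetime, imminent_risk: bool) -> datetime:
    """Illustrative only: 24 hours if there is an imminent risk of death or
    serious injury, otherwise 15 days from discovery of the incident."""
    window = timedelta(hours=24) if imminent_risk else timedelta(days=15)
    return discovered_at + window

# Example: incident discovered March 1, 2026 at 9:00 a.m., no imminent risk
print(reporting_deadline(datetime(2026, 3, 1, 9, 0), imminent_risk=False))
# -> 2026-03-16 09:00:00
```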
Whistleblower protections
The Act introduces strong whistleblower protections for employees ("covered employees") responsible for risk assessment or management. Frontier developers are prohibited from retaliating against employees who disclose information about specific and substantial dangers to public health or safety, or violations of the TFAIA, to authorities or internal investigators. Large frontier developers must provide an anonymous internal reporting process and regular updates to whistleblowers. Successful plaintiffs in whistleblower actions are entitled to attorney's fees, and courts may grant injunctive relief to prevent retaliation.
CalCompute initiative
The legislation establishes a consortium within the Government Operations Agency to develop a framework for "CalCompute," a public cloud computing cluster intended to support safe, ethical, and equitable AI research and innovation. The consortium, comprising representatives from academia, labor, public interest groups, and technical experts, is tasked with submitting a comprehensive report to the California legislature by January 1, 2027. The initiative is contingent on budgetary appropriation.
Enforcement and penalties
Noncompliance with the TFAIA, including failure to publish required documents, false or misleading statements regarding catastrophic risk, or failure to report incidents, may result in civil penalties of up to $1 million per violation, enforceable exclusively by the Attorney General. The TFAIA preempts local regulations adopted after January 1, 2025, that specifically address the management of catastrophic risk by frontier developers. It does not apply where preempted by federal law or where it conflicts with federal contracts.
Looking ahead
The TFAIA represents a significant step in California's approach to AI governance, focusing on transparency, risk mitigation, and public accountability. It requires large frontier AI developers to publicly disclose their risk management frameworks, establishes CalCompute to foster safe and equitable AI innovation, creates mechanisms for reporting critical safety incidents, protects whistleblowers, and imposes civil penalties for noncompliance. The law also anticipates future developments in AI capabilities, providing for periodic review and adjustment of key definitions and thresholds based on technological developments and stakeholder input.
Companies developing foundation AI models in California should determine whether they qualify as frontier or large frontier developers subject to the law. Covered developers will need to implement policies and procedures to ensure compliance, including policies implementing the transparency and reporting requirements and procedures to track and report critical safety incidents.
Footnote
1 Joint California Policy Working Group on AI Frontier Models, The California Report on Frontier AI Policy (June 17, 2025).
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.