24 October 2025

California's New Regulations For Developers Of Frontier AI Models: What To Know About The Transparency In Frontier Artificial Intelligence Act

Baker Botts LLP


On September 29, 2025, California Governor Gavin Newsom signed into law Senate Bill 53 ("SB 53"), which includes the Transparency in Frontier Artificial Intelligence Act ("TFAIA"). This new law makes California the first state to enact a statute that specifically addresses frontier artificial intelligence ("AI") development.

The TFAIA takes effect on January 1, 2026. For developers of AI models, the law imposes significant new requirements that demand prompt attention in order to avoid civil penalties of up to $1 million per violation.

Background of the Legislation

The Trump Administration has prioritized AI innovation through deregulation, and Congress has never passed any comprehensive federal legislation regulating AI development. Just a few months ago, Congress considered—but ultimately failed to pass—a 10-year ban on most state and local AI laws and regulations. As a result, states currently retain the authority to enact and enforce their own AI-related legislation.

California first attempted to enact an AI development regulatory scheme last year with SB 1047, which was vetoed by Governor Newsom. He concluded that the bill was overly burdensome to AI developers. Governor Newsom then convened the Joint California Policy Working Group on AI Frontier Models, which provided a report setting forth its recommendations on subjects including improving transparency, leading safety practices, and adverse event reporting.

Earlier this year, State Senator Scott Wiener (D-San Francisco) authored a revised piece of legislation, SB 53, which was designed in part to reflect that Working Group's recommendations and to address concerns about the prior iteration of the proposed law. The new bill was narrowed to focus more on post-deployment transparency, safety, and risk management, in contrast to its broader predecessor. Governor Newsom's recent signing of the TFAIA marks the culmination of California's years-long effort to impose safety and transparency regulations on AI development.

Applicability

The TFAIA applies to developers of frontier AI models, which are defined by the amount of computing power used to train, modify, or fine-tune the model: foundation models trained with "a quantity of computing power greater than 10^26 integer or floating-point operations ('FLOP')." Although few companies have publicly disclosed that they currently exceed this high technical threshold, that number is expected to grow rapidly over time, bringing far more companies within the scope of the TFAIA. A recent analysis projected that, if current trends hold, around 30 models may exceed the 10^26 threshold by 2027, and more than 200 by the beginning of 2030.

The law differentiates between "frontier developers" and "large frontier developers," with the latter subject to additional regulatory requirements. "Frontier developers" are defined as entities that "trained or initiated the training" of these high-compute frontier models. "Large frontier developers" are frontier developers that, together with their affiliates, had annual gross revenues exceeding $500 million in the preceding calendar year.

The TFAIA directs the California Department of Technology to review the statutory definitions of "frontier model," "frontier developer," and "large frontier developer" on an annual basis, and submit recommendations to the Legislature for any desired updates.

Notably, the law does not explicitly limit its applicability to only the frontier developers that are based in California. It is likely that California will seek to impose its new regulatory requirements on companies selling their AI products in the state, in the same way that California applies other laws to businesses with sufficient contacts with the state.

Scope and Requirements

The TFAIA imposes four major sets of obligations, some of which apply to all frontier developers, while others apply only to the narrower subset of "large frontier developers."

Publication of AI Framework: Large frontier developers must publish on their websites a "frontier AI framework" that, among other things, documents the developer's cybersecurity practices, alignment with national and international standards and "industry-consensus best practices," governance structures, and procedures to identify and respond to safety incidents. Importantly, the framework must also evaluate whether the model has capabilities that could pose a "catastrophic risk," defined as a foreseeable and material risk that a frontier model could materially contribute to the death of or serious injury to more than 50 people, or more than $1 billion in damage to or loss of property, by: (i) providing expert-level assistance in creating or releasing a chemical, biological, radiological, or nuclear weapon; (ii) engaging in criminal conduct or cyberattacks without meaningful human oversight, intervention, or supervision; or (iii) evading the control of its developer or user. The framework must be reviewed and updated annually.

Publication of Transparency Report: All frontier developers must publish a transparency report before deploying a frontier model. The transparency report must contain information about the model's intended uses, restrictions and conditions on uses, and languages and modalities of output supported by the model, among other information. Additionally, a large frontier developer's transparency report must also provide a summary of assessments of catastrophic risks from the model.

Disclosure of Safety Incidents: Frontier developers must report critical safety incidents involving a frontier model to the Office of Emergency Services ("OES") within 15 days of discovery. However, if a critical safety incident poses "an imminent risk of death or serious physical injury," it must be disclosed within 24 hours to an appropriate authority, such as a law enforcement or public safety agency. A "critical safety incident" includes harm resulting from the materialization of a catastrophic risk, loss of control of a frontier model that results in death or bodily injury, or the model's use of deceptive techniques against the developer to subvert controls or monitoring in a way that demonstrates materially increased catastrophic risk. OES also must establish a mechanism for members of the public to report such incidents.

Protections for Whistleblowers: The TFAIA also prohibits retaliation against employees or contractors who report catastrophic risks. Employers must provide notice to their employees of their rights and responsibilities under the law and maintain anonymous reporting channels.

Enforcement

The TFAIA authorizes California's Attorney General to bring civil actions for violations—which could include failing to report critical incidents, provide the required reports, or comply with their own frameworks—with penalties of up to $1 million per violation, which are to be scaled based on the severity of the offense.

Conclusion

Governor Newsom's signing statement referenced California's "unique opportunity," given its status as the home of many AI companies, researchers, and developers, "to provide a blueprint for well-balanced AI policies beyond our borders." Governor Newsom further acknowledged California's intent to "press[] the federal government to act on national standards."

California's stance thus differs markedly from the Trump Administration's favored deregulatory approach to AI development. Indeed, just a few weeks ago, the White House invited the public to submit comments on "Federal statutes, regulations, agency rules, guidance, forms, and administrative processes" that "unnecessarily hinder" the development or deployment of AI technologies.

Critics of California's law contend that the new compliance requirements and their associated costs may place a heavy burden on AI companies, potentially stifling innovation and development. Moreover, concerns have been raised that if other states follow California's lead and pass their own laws to regulate AI development, AI companies may face an increasingly fragmented regulatory landscape, in which developers must navigate a patchwork of differing state regulations. For example, in New York, the Responsible AI Safety and Education ("RAISE") Act would impose new safety and security protocols, along with reporting and auditing requirements, and is presently awaiting Governor Kathy Hochul's signature. A bevy of new state laws imposing their own requirements on AI development could potentially invite litigation on a variety of grounds, including the burden on interstate commerce.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
