ARTICLE
8 October 2024

California Gov. Newsom Vetoes AI Safety Act, Future Attempts To Regulate AI Will Likely Differ

Jones Walker

On Sunday, Sept. 29, California Gov. Gavin Newsom vetoed California's most comprehensive attempt yet to regulate artificial intelligence in the state, the "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act" (SB 1047), also referred to as the AI Safety Act.

A Different Approach Suggested

In his message announcing that he was returning the AI Safety Act to the California Senate without his signature, Newsom acknowledged that he had received — and signed — "several thoughtful proposals to regulate AI companies" to address risks such as the spread of misinformation, risks to online privacy, threats to critical infrastructure, and disruptions in the workforce.

The AI Safety Act focused on the largest and most expensive AI models. Gov. Newsom acknowledged this limitation and noted that the AI Safety Act could create a "false sense of security."

Interestingly, without specifically stating so, Newsom's message signaled that it may be preferable to more closely align the state's comprehensive AI regulation efforts with those in Europe under the EU AI Act, which takes a different approach that we identified in our first client alert on the AI Safety Act. The EU AI Act classifies AI systems according to risk, with a particular focus on "high-risk" systems. To that end, Newsom noted that the AI Safety Act "does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data."

AI Regulation and California's Impact Will Continue

California likely has not ceded its position as a leader in the regulation of emerging technologies. For example, California enacted the AI Transparency Act, which requires certain developers of AI systems to provide AI watermarking capabilities and AI detection tools, as well as other AI legislation to protect against unauthorized use of an individual's digital likeness.

By taking a more focused approach, California has perhaps indicated that narrow regulation of AI in the United States will be the norm. It may also signal that future attempts at more ambitious, comprehensive AI regulation — whether in California, in another state, or at the federal level — will focus on the risks and impacts of AI systems.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
