6 October 2025

California's New AI Laws: What Just Changed For Your Business

Jones Walker

California just passed comprehensive AI safety legislation, enacting 18 new laws that affect everything from deepfakes to data privacy to hiring practices. If you do business in California — or use AI tools — here's what you need to know now.

The State That Wouldn't Wait

While Washington debates federal AI regulation, California has already written the rulebook. This week, Governor Gavin Newsom signed a sweeping package of 18 AI bills into law, making California the first US state to establish comprehensive governance over artificial intelligence.

The timing matters. With recent federal efforts to preempt state-level AI regulation now stalled, California's move sets a precedent that other states are already racing to follow. As with its early efforts in the privacy space (through the California Consumer Privacy Act of 2018), California's AI rules are quickly becoming everyone's AI rules.

The Flagship: California's AI Safety Law

The centerpiece of this legislative package is the Transparency in Frontier Artificial Intelligence Act (TFAIA), enacted as Senate Bill 53. This landmark law targets the developers of the most powerful AI systems and establishes California as the first state to directly regulate AI safety. It also builds on the recommendations of the Joint California Policy Working Group on AI Frontier Models.

What TFAIA Requires

Developers of "frontier" AI models must now:

  • Publish their safety plans: Companies must disclose how they're incorporating national and international safety standards into their development processes.
  • Report critical incidents: Both companies and the public can now report serious safety events to the California Office of Emergency Services.
  • Protect whistleblowers: Employees who raise concerns about health and safety risks from AI systems gain legal protection.
  • Support public AI research: Through a new consortium, the state will develop CalCompute, a public computing cluster for building "safe, ethical, equitable, and sustainable" AI.

The Industry Pushback — and What Got Weakened

The tech industry lobbied hard, and it shows. The final version of TFAIA is considerably softer than earlier drafts:

Incident Reporting Narrowed: Companies are only required to report events that result in physical harm. Financial damage, privacy breaches, or other non-physical harms? These aren't covered under mandatory reporting.

Penalties Slashed: The maximum fine for a first-time violation, even one causing $1 billion in damage or contributing to 50 or more deaths, dropped from $10 million to just $1 million. Critics note that this creates a troubling cost-benefit calculation for large tech companies, a dynamic that has arguably played out in other regulatory areas.

The message? For billion-dollar corporations, safety violations may be just another line item in the budget.

The Broader Package: 18 Laws Reshaping AI Use

Beyond TFAIA, California's new AI laws, many of which took effect in January 2025, create compliance obligations across multiple industries. For instance:

1) Deepfakes and Election Integrity

California is taking direct aim at AI-generated deception:

  • Criminal penalties for deepfake pornography: Creating or distributing non-consensual intimate images using AI is now a crime (SB 926).
  • Election protections: AB 2655 requires large online platforms to label or remove "materially deceptive" election content, particularly AI-generated video or audio that could damage candidates or mislead voters, while AB 2355 requires disclosure when political advertisements use AI-generated content.

Real-world impact: Political campaigns and content platforms must now implement detection and labeling systems before the 2026 election cycle.

2) Your AI Data Is Now Personal Information

Here's a change that affects everyone: AI-generated data about you is now officially "personal information" under the California Consumer Privacy Act (AB 1008).

What does this mean practically?

  • AI systems that create profiles, predictions, or inferences about you must now treat that output data with the same protections as traditional personal information.
  • You gain new rights to access, delete, and control AI-generated data about yourself (see the sketch after this list).
  • Neural data — information about your brain activity — gets even stronger protection as "sensitive personal information" (SB 1223).
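
To make that concrete, here is a minimal, hypothetical sketch of what AB 1008 implies for a business handling consumer requests. The record layout and function names below are illustrative assumptions, not anything prescribed by the statute:

```python
from dataclasses import dataclass, field

@dataclass
class ConsumerRecord:
    """Hypothetical profile held by a business subject to the CCPA."""
    email: str
    collected: dict = field(default_factory=dict)  # data the consumer provided
    inferred: dict = field(default_factory=dict)   # AI-generated predictions and profiles

def handle_access_request(record: ConsumerRecord) -> dict:
    # Under AB 1008, AI-generated inferences count as "personal information,"
    # so an access response must include them alongside collected data.
    return {
        "email": record.email,
        "collected_data": record.collected,
        "ai_generated_data": record.inferred,  # no longer safe to omit
    }

def handle_deletion_request(record: ConsumerRecord) -> None:
    # Deletion rights now reach model outputs about the consumer, too.
    record.collected.clear()
    record.inferred.clear()
```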

3) The Workplace: No More AI Autopilot

New regulations from California's Civil Rights Department, effective October 1, 2025, fundamentally change how AI can be used in employment:

The Core Rule: Employers can't use automated decision systems (ADS) that discriminate based on protected categories under the Fair Employment and Housing Act.

The Requirement: Companies should conduct bias audits of their AI tools used for hiring, promotion, and evaluation.

The Shift: This moves the legal focus away from proving intent to discriminate and toward demonstrating discriminatory impact. If your AI tool produces discriminatory outcomes, even unintentionally, you're exposed to legal risk. This is not dissimilar to recent shifts in the children's privacy law landscape, which impose specific constructive knowledge standards.

Practical example: That resume-screening AI you're using? You need documentation showing you've tested it for bias against protected groups. No audit? You're rolling the dice.
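
One common starting point for such an audit is the "four-fifths rule" used in disparate-impact analysis: compare each group's selection rate to the highest group's rate and flag ratios below 0.8. The sketch below is illustrative only; it assumes you can tag screening outcomes by a protected category, and the 0.8 threshold is a conventional red flag, not a legal safe harbor:

```python
from collections import Counter

def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """outcomes: (group, was_selected) pairs from the screening tool."""
    totals, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        selected[group] += was_selected
    return {group: selected[group] / totals[group] for group in totals}

def adverse_impact_ratios(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    # Four-fifths rule: divide each group's selection rate by the highest
    # group's rate; ratios under 0.8 conventionally warrant investigation.
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {group: rate / top for group, rate in rates.items()}

# Illustrative data: 40% of group_a passes the screen, 25% of group_b.
results = ([("group_a", True)] * 40 + [("group_a", False)] * 60
           + [("group_b", True)] * 25 + [("group_b", False)] * 75)
print(adverse_impact_ratios(results))
# {'group_a': 1.0, 'group_b': 0.625}  -> group_b is below 0.8: flag for review
```

Whatever method you use, keep the results and any remediation steps on file; under an impact-based standard, that documentation is the core of your defense.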

4) Healthcare: Keeping Humans in the Loop

California's new healthcare AI laws establish a critical principle: algorithms can't make final medical decisions.

Under SB 1120, AI systems are prohibited from independently determining medical necessity in insurance utilization reviews. A physician must make the final call.

Why this matters: This protects patients from algorithmic denials while still allowing AI to assist with analysis and recommendations. It's a model other states are already adopting.
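
As a rough sketch of the human-in-the-loop principle (the workflow and names here are hypothetical, not drawn from SB 1120's text): the AI tool may produce a recommendation, but no determination of medical necessity can issue without a physician's sign-off.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIRecommendation:
    case_id: str
    suggested_decision: str  # e.g., "approve" or "deny"
    rationale: str           # the model's supporting analysis

@dataclass
class FinalDetermination:
    case_id: str
    decision: str
    physician_id: str        # a licensed physician made the final call

def finalize_review(rec: AIRecommendation,
                    physician_id: Optional[str],
                    physician_decision: Optional[str]) -> FinalDetermination:
    # The algorithm may assist, but it cannot independently determine
    # medical necessity; without a physician's decision, nothing is finalized.
    if physician_id is None or physician_decision is None:
        raise PermissionError("A physician must make the final determination.")
    return FinalDetermination(rec.case_id, physician_decision, physician_id)
```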

What This Means for Your Business

If You're a Tech Company

Immediate action items:

  • Review your AI systems against the new compliance requirements.
  • Document your safety practices and bias testing procedures.
  • Establish whistleblower protection policies.
  • Prepare for increased scrutiny from California regulators.

Strategic consideration: California's strictest-in-the-nation rules often become de facto national standards. Building for California compliance now may save costly adjustments later.

If You Use AI Tools

Questions to ask your vendors:

  • Have you conducted bias audits on this system?
  • What happens if your AI produces a discriminatory outcome?
  • Do your contracts shift all liability to us?
  • How do you handle California's new data privacy requirements?

Red flag: Vendors that can't answer these questions clearly, or whose contracts dump all AI-related liability onto you, pose significant risk.

If You're in Healthcare

Priority actions:

  1. Review all AI-assisted utilization review processes to ensure physician oversight.
  2. Train staff on new disclosure requirements for AI in patient interactions.
  3. Document human review procedures for all AI-driven medical decisions.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
