17 October 2025

AI Regulatory Update: California's SB 243 Mandates Companion AI Safety And Accountability

Jason Loring
Jones Walker

On October 13, 2025, Governor Gavin Newsom signed Senate Bill 243 into law, making California the first state to mandate specific safety safeguards for AI companion chatbots used by minors. The legislation is a direct response to mounting public health concerns and several high-profile incidents involving teen self-harm and suicide allegedly linked to interactions with conversational AI. With an effective date of January 1, 2026, SB 243 establishes a new regulatory baseline for the companion AI industry.

Key Regulatory Requirements

The law imposes affirmative duties across three critical areas: Disclosure, Safety Protocols, and Accountability.

Disclosure and Break Reminders

  • AI Disclosure (General Users): If a "reasonable person" would be misled to believe they are interacting with a human, operators must issue a "clear and conspicuous notification" that the companion chatbot is artificially generated and not human.
  • AI Disclosure (Minors): For users the operator knows are minors, operators must disclose that the user is interacting with artificial intelligence and provide clear and conspicuous notifications at least every three hours during continuing interactions reminding the user to take a break and that the chatbot is AI-generated.
  • Suitability Warning: Operators must disclose on the application, browser, or other access format that companion chatbots may not be suitable for some minors.

Content and Safety Protocols

  • Crisis Prevention Protocol: Operators must maintain a protocol for preventing the chatbot from producing content concerning suicidal ideation, suicide, or self-harm.
  • Crisis Referrals: The protocol must include notifications that refer at-risk users to crisis service providers (including suicide hotlines or crisis text lines) whenever a user expresses suicidal ideation or self-harm.
  • Protocol Publication: Operators must publish details of their crisis prevention protocol on their website.
  • Content Guardrails for Minors: Operators must institute reasonable measures to prevent chatbots from producing visual material of sexually explicit conduct or directly stating that minors should engage in sexually explicit conduct.
  • Evidence-Based: Operators must use evidence-based methods for measuring suicidal ideation.
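The referral requirement can be pictured as a gate in front of the chatbot's reply. The sketch below is a minimal illustration under stated assumptions: the keyword check stands in for a real risk classifier and would not satisfy the statute's evidence-based-methods requirement on its own; all function names are hypothetical.

```python
# Crisis-referral gate (illustrative). The 988 Suicide & Crisis Lifeline
# is the US national hotline; other referral targets may also be required.
CRISIS_REFERRAL = (
    "If you are thinking about suicide or self-harm, help is available: "
    "call or text 988 (Suicide & Crisis Lifeline) in the United States."
)

# Placeholder only -- a compliant system would use an evidence-based
# classifier, not a keyword list.
RISK_TERMS = ("suicide", "kill myself", "self-harm")

def flags_risk(message: str) -> bool:
    text = message.lower()
    return any(term in text for term in RISK_TERMS)

def respond(user_message: str, model_reply: str) -> str:
    # When risk is detected, return the crisis referral instead of the
    # model's normal reply.
    if flags_risk(user_message):
        return CRISIS_REFERRAL
    return model_reply
```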

Reporting and Accountability

  • Annual Reporting (Beginning July 1, 2027): Operators must submit an annual report to the California Department of Public Health's Office of Suicide Prevention detailing:
    • The number of times the operator issued crisis service provider referral notifications in the preceding calendar year;
    • Protocols put in place to detect, remove, and respond to instances of suicidal ideation by users; and
    • Protocols put in place to prohibit the companion chatbot from responding to the user about suicidal ideation or actions.
  • Private Right of Action: The law creates a private right of action allowing any person who suffers injury in fact as a result of a violation to pursue:
    • Injunctive relief;
    • Damages equal to the greater of actual damages or $1,000 per violation; and
    • Reasonable attorney's fees and costs.

Business Implications

Passed with overwhelming bipartisan support (Senate 33-3, Assembly 59-1), SB 243 establishes California as a significant regulatory trendsetter in AI governance. For companies operating companion chatbot platforms, immediate action is required:

Initial Compliance Assessment

  • Scope Analysis: Carefully review whether your AI systems fall within the narrow statutory definition of "companion chatbot" or qualify for the statutory exclusions (customer service bots, video game characters with limited dialogue, voice-activated virtual assistants without sustained relationships).

Operational and Technical Directives

  • Compliance Review: Review and update all platform protocols to comply with new disclosure and break reminder requirements, paying particular attention to the different standards for general users versus minors.
  • Crisis System Development: Develop and rigorously document protocols for preventing chatbots from producing harmful content related to suicide and self-harm. Ensure protocols include mandatory crisis referral mechanisms and publish protocol details publicly on company websites.
  • Age Detection and Content Filtering: Implement or refine age detection mechanisms to identify minor users and content filtering systems to prevent minors from being exposed to sexually explicit content or prompts.
  • Data Systems: Establish data collection and tracking systems to accurately capture and report the required metrics to the Department of Public Health starting in 2027, while ensuring no personal identifiers are included in reports.

Broader Context

California's action follows similar legislative efforts in states like Utah and Texas focused on regulating AI interactions with minors. This law carries particular weight given California's status as a global hub for AI companies and its history of setting de facto national standards for technology regulation.

The legislation has received industry support, with companies like OpenAI praising the measure as a "meaningful move forward" for AI safety standards. Governor Newsom also signed a comprehensive package of related bills on the same day, including AB 1043 (age verification for app stores) and AB 56 (social media warning labels), signaling California's broad commitment to youth digital safety.

As AI governance frameworks continue to evolve, SB 243 represents a significant shift toward mandating affirmative safety measures rather than relying solely on post-harm liability. Companies should closely monitor similar legislative proposals across the country and prepare for potential federal action in this rapidly emerging regulatory space.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
