ARTICLE
22 January 2026

AI Chatbots Face Rising Legal And Legislative Scrutiny

Kelley Drye & Warren LLP


AI chatbots are attracting tough regulatory scrutiny. Here's what you need to know if your chatbot is the target of a subpoena or civil investigative demand (CID). 

AI chatbots are no longer a novelty. They are now embedded in social media platforms, search engines, and educational tools. Providers of AI chatbots face mounting scrutiny nationwide, as advocacy groups and legislators question the effects of chatbots on mental health, particularly for children. Parents and former users have sued OpenAI over what they allege to be ChatGPT's failure to prevent suicidal ideation, bringing claims that include allegations of assisted suicide, wrongful death, and involuntary manslaughter.

As chatbot providers face a wave of high-profile wrongful-death and product-liability suits, both regulators and lawmakers have taken notice. As we detail further in this blog post, in recent months we've seen:

  • Most substantive provisions of California's first companion-chatbot statute take effect (SB 243);
  • Similar bills regulating AI chatbots introduced in Florida (HB 659), Massachusetts (S 264/S 243), Missouri (HB 2032/HB 2031), New Jersey (A 6246), Pennsylvania (SB 1090/HB 2006), Washington (SB 5870), and Tennessee (HB 1455/SB 1493);
  • A California ballot measure that could dramatically alter AI safety obligations within the state;
  • A letter from 42 attorneys general warning that "sycophantic and delusional" chatbot outputs may violate consumer protection and children's privacy laws; and
  • A federal enforcement push, including the FTC's 6(b) inquiry.

As enforcement ramps up in this space, AI chatbot companies should expect increased scrutiny and could receive subpoenas or civil investigative demands (CIDs) from regulators at the state or federal level seeking information about their business practices. Businesses can prepare by understanding the applicable legal landscape, building appropriate internal processes and procedures to address stated concerns from regulators and legislators, and consulting with counsel as soon as possible when receiving a regulatory inquiry.

Here's a roundup of the latest updates:

California Takes a Leading Role in AI Chatbot Regulation with SB 243

In October, California Governor Gavin Newsom signed SB 243, the state's first law regulating "companion chatbots." The law requires developers to clearly notify users when they are interacting with an AI chatbot and to implement additional safeguards for minors, such as reminding the user every three hours to take a break and that the chatbot is "not human." The law took effect on January 1, 2026.

SB 243 marks one of the first attempts to directly regulate AI chatbots designed to simulate human relationships. The law defines a "companion chatbot" as an AI system that provides "adaptive, human-like responses to user inputs" and is "able to sustain a relationship across multiple interactions." This definition excludes customer service chatbots, video game features, and common voice-activated assistants like Alexa and Siri, focusing instead on AI systems that can sustain ongoing, emotionally engaging interactions with users.

Under the new law, operators, defined as any person who makes a companion chatbot available to users in California, must take certain actions, including:

  • In cases where users might be "misled to believe that the person is interacting with a human," inform users that they are interacting with an AI companion chatbot.
  • For users under eighteen, disclose that the user is interacting with AI; provide an additional recurring notification every three hours reminding the minor that they are interacting with a chatbot; and institute reasonable measures to prevent certain explicit content.

Compromise Ballot Measure in California Advances Public Debate on Kids' AI Safety

In California, efforts to regulate children's use of AI remain unsettled following Governor Newsom's veto of AB 1064, which would have banned companies from offering AI companion chatbots in California except under certain circumstances. In the wake of that veto, Common Sense Media advanced a proposed ballot initiative aimed at establishing a comprehensive AI-safety framework for systems used by children, including a risk-tiered approach modeled in part on the European Union's AI Act. After OpenAI publicly opposed the measure and introduced a competing proposal, Common Sense Media and OpenAI agreed to a revised compromise initiative, the "Parents & Kids Safe AI Act," which would introduce age-assurance requirements and new limits on the sale of children's data. If the initiative gathers enough signatures to qualify for the ballot, the measure will head to voters this November.

Federal Lawmakers and Regulators Ramp Up Scrutiny of AI Chatbots

Developments in California are part of a broader wave of scrutiny surrounding AI chatbots. In addition to legislative initiatives, the Federal Trade Commission (FTC) issued 6(b) orders late last year to Google, Character.AI, Meta, Snapchat, and OpenAI, requiring these companies to provide detailed information regarding their AI companion products. In its orders, the FTC stated that it seeks information regarding how these companies:

  • Monetize user engagement, including through subscriptions and in-app purchases;
  • Share user data with third parties;
  • Develop and deploy chatbots, including the use of personalization and companion chatbots;
  • Measure and test for negative impacts after deployment;
  • Mitigate potential negative impacts to children;
  • Collect, retain, and delete user information; and
  • Make and substantiate public representations about chatbot capabilities, safety, and suitability for minors.

Congress is also putting pressure on the FTC to take action on AI companion chatbots. On October 21, 2025, California Senators Alex Padilla and Adam Schiff wrote to the FTC, urging the agency to broaden its 6(b) inquiry. Their letter called on the FTC to examine whether AI chatbot companies have adequate tools to detect and respond to mental health crises and whether they disclose the limitations of their current safeguards.

Looking Ahead

California's SB 243 represents one of the first legislative efforts to regulate emotionally interactive AI systems, signaling growing state and federal concern about the mental health impacts of AI technologies. The law's disclosure and reporting requirements establish a framework that other jurisdictions are likely to look to as they develop their own AI regulatory approaches. With the opening of the 2026 legislative session, we'll continue to monitor new AI chatbot laws as they are introduced this year.

With significant legislative and regulatory activity at the state and federal level, companies deploying AI chatbots should expect heightened scrutiny and consider steps to assess and shore up their compliance and risk-management practices.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

