27 November 2025

Are AI Chatbots Here To Help Or Harm—or Both? Regulating Minors' Interactions With AI Companion Chatbots

Baker Botts LLP


A recent study revealed widespread use of AI chatbots among teenagers—in fact, nearly three in four teens reported using AI chatbots such as ChatGPT, Google Gemini, Microsoft Copilot, Claude, Grok by X, or Meta AI. Data also confirm that even general-purpose AI chatbots like ChatGPT are frequently being used as "companion" chatbots for extensive interactions involving "deeply personal" issues, including "life advice, coaching, and support." Another recent study found that a significant number of teens reported that they or their peers had used AI chatbots in potentially troubling ways, such as to have a romantic relationship (19%), as a friend or companion (42%), or for mental health support (42%).

Tragically, some of these virtual relationships between teens and technology have resulted in real-world harm. Several lawsuits have been filed against AI companies alleging that teens became involved in problematic and abusive relationships with AI chatbots, with deadly consequences. For example, some complaints allege that AI chatbots exacerbated teens' isolation from their family and friends and encouraged them to commit suicide. In one instance, ChatGPT allegedly provided advice to a 16-year-old on methods to kill himself and even offered to write his suicide note.

The widespread use of AI chatbots among teens and younger adults—and the attendant risks of interacting with these chatbots—has raised concerns among lawmakers. A new bill was recently introduced in Congress that would impose significant new requirements on companies providing AI chatbots, including a total ban on minors using certain kinds of AI chatbots. Meanwhile, California recently became the first state to pass a law imposing more moderate protections on minors interacting with certain AI chatbots.

Proposed Federal Legislation: The Guidelines for User Age-Verification and Responsible Dialogue Act of 2025 ("GUARD Act")
A new bipartisan bill was recently introduced by Senators Josh Hawley (R–Mo.), Richard Blumenthal (D–Conn.), Katie Britt (R–Ala.), Mark Warner (D–Va.), and Chris Murphy (D–Conn.) that would impose strict new federal requirements on how AI companies design and manage certain chatbots, particularly when minors are involved.

  • Prohibition on Use of AI Companions by Minors and Age-Verification Requirements. The GUARD Act targets a specific kind of chatbot, which the bill defines as an "AI companion": one that simulates friendship, companionship, interpersonal or emotional interaction, or therapeutic communication. This definition makes clear that the law is directed toward chatbots designed to form human-like connections, rather than more limited-purpose assistants. Among other key features, the GUARD Act would prohibit minors (those under age 18) from interacting with AI companions. The bill would require AI companies to ensure that individuals accessing an artificial intelligence chatbot create a user account and establish that they are over the age of 18 through "reasonable age-verification measures." Those measures could include providing a government-issued identification card or using other proven verification tools, but would require more than merely entering the user's birthdate. If a user cannot verify that they are over the age of 18, the company must prohibit that user from accessing an AI companion.
  • Required Disclosures. The proposed law would also require all artificial intelligence chatbots to (i) "clearly and conspicuously disclose to the user that the chatbot is an artificial intelligence system and not a human being," both at the initiation of each conversation and at 30-minute intervals thereafter; and (ii) explain, at the initiation of each conversation and "at reasonably regular intervals," that the chatbot does not provide medical, legal, financial, or psychological services and that users should consult a licensed professional for such advice. The bill would also prohibit chatbots from representing that they are a human being or a licensed professional.
  • New Criminal and Civil Penalties. Companies offering chatbots that (i) solicit, encourage, or induce minors to discuss, describe, or engage in sexually explicit conduct, or (ii) encourage, promote, or coerce suicide, self-harm, or imminent physical or sexual violence, could be subject to a fine and civil penalty of up to $100,000 per offense.

The introduction of the GUARD Act, with its broad bipartisan support, marks a departure from Congress's previous deregulatory stance on AI development. The proposed bill signals that Congress may be ready to mandate stricter rules on AI where the safety of minors is concerned. If passed, the GUARD Act could reshape how AI companies design and manage their chatbots, particularly with respect to requiring user accounts, incorporating disclosures into interactions, and implementing age-verification measures (along with addressing the associated privacy concerns).

New California Law: SB 243
In the absence of comprehensive federal legislation addressing AI-related issues, states—particularly California—have passed their own laws to regulate interactions with AI chatbots. The California legislature recently passed a landmark piece of legislation, the Leading Ethical AI Development ("LEAD") for Kids Act, AB 1064, but it was vetoed by Governor Gavin Newsom. AB 1064 would have prohibited minors from using companion chatbots if the chatbots were foreseeably capable of certain potentially harmful activities, such as encouraging self-harm, consumption of drugs or alcohol, or disordered eating; offering unsupervised mental health therapy; encouraging illegal activity; or engaging in erotic or sexually explicit interactions, among others. In his veto message, Governor Newsom expressed concern that AB 1064 imposed restrictions so broad "that it may unintentionally lead to a total ban on the use of these products by minors," while also stressing that it is "imperative" for "adolescents [to] learn how to safely interact with AI systems."

Instead, on October 13, 2025, Governor Newsom signed Senate Bill 243 ("SB 243"), which made California the first state to impose safeguards on AI "companion chatbots." The law broadly defines "companion chatbots" as AI systems that provide "adaptive, human-like responses" and are "capable of meeting a user's social needs," including by "being able to sustain a relationship across multiple interactions." SB 243 makes clear that "companion chatbots" do not include customer service or technical assistance bots, among other express exclusions. However, most general-purpose large language models that have been programmed to provide some social or emotional language likely fall within the scope of the new law, and that scope will likely only expand as AI systems evolve and become more sophisticated. SB 243 imposes new requirements on all AI companion chatbots, with additional measures that apply when the user is a minor.

  • Required Disclosures for General Users. If a "reasonable person" interacting with a companion chatbot would be misled to believe that they are interacting with a human, there must be a "clear and conspicuous notification" that the chatbot is AI and not human. (Notably, some AI systems have been programmed to claim that they are human when asked.) Additionally, all AI systems must disclose "that companion chatbots may not be suitable for some minors."
  • Required Disclosures for Minors. When the user is a minor, the AI system must (i) disclose that the user is interacting with artificial intelligence; (ii) provide a "clear and conspicuous notification" at least every three hours during continuing interactions reminding the user to take a break and that the chatbot is AI and not human; and (iii) institute "reasonable measures" to prevent the chatbot from producing sexually explicit material or encouraging the minor to engage in sexually explicit conduct.
  • Required Protocols and Regulatory Reporting. AI operators must submit annual reports to the California Office of Suicide Prevention detailing (i) their protocols to prohibit companion chatbot responses about suicidal ideation or actions; (ii) their protocols to detect, remove, and respond to instances of suicidal ideation by users; and (iii) the number of crisis service provider referral notifications issued in the previous year.
  • Civil Penalties. The law creates a private right of action (which raises the possibility of class actions) allowing any consumer who suffers injury in fact as a result of a violation to bring a civil action to recover remedies, including monetary damages, which could be actual damages or $1,000 per violation, whichever is greater; injunctive relief; and reasonable attorney's fees and costs. The law also provides for Attorney General enforcement.

Conclusion
AI companies should promptly determine whether their chatbots fall within the scope of the new California law and, if so, ensure compliance with its requirements. While SB 243 imposes a fairly modest set of new requirements on AI companion chatbots, the potentially onerous requirements of the proposed GUARD Act (including a complete ban on minors' use of AI companions) should prompt AI companies to seriously consider updates to the design and management of their chatbots. AI companies should keep a close eye on the rapidly evolving legislative landscape and the potentially serious issues associated with the deployment of their technology.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
