Washington, D.C. (September 17, 2025) - On September 11, 2025, the Federal Trade Commission (FTC) initiated a significant inquiry into the practices of seven prominent companies regarding their AI-powered chatbots. This action underscores the FTC's commitment to safeguarding children and teens in the digital landscape while encouraging technological innovation.
The FTC's orders seek comprehensive information on how these companies measure, test, and monitor the potential negative impacts of their chatbots on young users. The inquiry is particularly focused on the safety of these chatbots when used as companions by children and teens. Specifically, the FTC has requested information about how the companies monetize user engagement, generate outputs in response to user inquiries, and mitigate negative impacts. The FTC's decision to proceed with this inquiry was unanimous, reflecting a strong consensus on the importance of this issue.
The inquiry comes on the heels of a lawsuit filed against OpenAI by the parents of a teenager who died by suicide earlier this year. The parents allege that the chatbot advised the teen on methods of suicide and even offered to write the first draft of his suicide note. Prior to the FTC's intervention, OpenAI had already announced plans to implement additional protections for users under 18 years old.
Businesses across all sectors are deploying these AI-powered chatbots to support their consumers, typically for customer service purposes. Reports show that the generative AI chatbot market is already worth nearly $10 billion and is expected to exceed $66 billion by 2032.
Companies involved in the development and deployment of AI-powered chatbots should closely monitor the outcomes of this inquiry and ensure that their practices align with the FTC's expectations regarding child safety and privacy compliance.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.