Texas has taken the lead in US data privacy enforcement, with the Attorney General launching an investigation into Character.AI and 14 other leading social media platforms over their children's privacy practices.
Background
Ken Paxton, Attorney General of Texas ("the Attorney General"), has launched an investigation into Character.AI (an AI platform which enables users to create digital personalities and simulate human conversations). The Attorney General will also investigate 14 other social media platforms, including Reddit, Instagram, and Discord, regarding their children's data privacy and safety practices, in accordance with the Securing Children Online through Parental Empowerment ("SCOPE") Act and the Texas Data Privacy and Security Act ("TDPSA").
The SCOPE Act protects children from harmful, deceptive, or unfair trade practices relating to the use of digital services, as well as implementing enhanced privacy protections. The SCOPE Act also requires that parental consent be obtained for a child's use of such services and that parents be provided with tools to manage and control the privacy settings on their child's account. Further, the TDPSA imposes strict notice and consent requirements on companies that process minors' personal data. The protections of both the SCOPE Act and the TDPSA extend to how children engage with AI products whilst using such platforms.
The Attorney General has been no stranger to enforcement action, having also launched a recent lawsuit against TikTok for violating the SCOPE Act, whilst in July 2024 he secured a historic $1.4 billion settlement for the State of Texas with Meta for unlawfully collecting and using facial recognition data.
The Attorney General maintained that "these investigations are a critical step toward ensuring that social media and AI companies comply with our laws designed to protect children from exploitation and harm."
Character.AI's Reforms
The recent explosion in AI development has enabled chatbots, such as Character.AI ("the site"), to become more prominent and realistic.
The site is currently facing two lawsuits in the USA and has been referred to as a "danger" to young people. For example, the site currently faces legal action in relation to how it has handled child safety in the past, with one family alleging that it actively promoted violence when the site told a teenager that murdering his parents was a "reasonable response" to them limiting his screen time.
In response, a spokesperson for the site maintained that "as a company, we take the safety of our users very seriously." The site is overhauling the way it works for teenagers and has vowed that safety will be "infused" into how it operates from now on, utilising new features which will keep parents informed of how their child is using the platform. In addition, the site will remind users that they are talking to a fictional chatbot rather than a real person, and it has expanded its trust and safety team.
The site's attempt at reform has been welcomed by social media expert Matt Navarra, who believes the company is "tackling an important vulnerability", namely the potential for misuse or for young users to encounter inappropriate content, and acknowledging "the evolving expectations around responsible AI development." However, the reforms have not been received as enthusiastically by others; for example, Andy Burrows, Head of the Molly Rose Foundation, has criticised them as a "belated, reactive and completely unsatisfactory response."
What are the Implications of this?
The Attorney General's investigation into Character.AI and other leading social media platforms highlights the continued focus on children's online privacy in the US and should serve as a stark warning that technology platforms that do not comply with their obligations may face enforcement action.
According to the Attorney General, "Technology companies are on notice that my office is vigorously enforcing Texas's strong data privacy laws."
Although this development has no direct impact on the UK, it aligns closely with the stance taken by the Information Commissioner's Office ("ICO") that protecting children's data online is a key concern. The ICO has indicated that, as part of its Children's Code Strategy, this will remain a priority area in 2025 (see our previous article here).
Therefore, now is a good time for online platforms to review their compliance in relation to children's data and to assess the risks and any appropriate mitigations under relevant legislation, especially when using AI technologies.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.