Last week, South Carolina Attorney General Alan Wilson proclaimed, "[i]f Congress won't protect kids from Big Tech, states will." Attorneys general from 27 states, led by the Utah Attorney General's office, submitted an amicus brief in Computer & Communications Industry Association, et al. v. James Uthmeier, now on appeal before the Eleventh Circuit. In that case, the plaintiffs challenge the legality of Florida's content moderation law, H.B. 3 (codified at Florida Statute § 501.1736), which prohibits certain social media platforms with "addictive features" from providing accounts to children 13 and younger, and from providing accounts to children ages 14-15 without parental consent. The lower court blocked Florida from enforcing the law, finding that the law implicates the First Amendment yet fails the intermediate scrutiny test because it burdens the protected speech of minors under 16 "despite the availability...of substantially less burdensome alternatives." In their brief, the states argue that § 501.1736 is content neutral and readily passes intermediate scrutiny, and therefore should be enforced. The states describe their interests in the case by citing "adverse mental health outcomes among teenagers" and allegedly addictive elements of social media, noting that "[a]mici States have compelling interests in protecting children and youth from the harmful effects of excessive social media use."
Days later, 44 AGs sent a letter through the National Association of Attorneys General (NAAG) to 12 AI companies to "inform" the companies of the AGs' "resolve to use every facet of [their] authority to protect children from exploitation by predatory artificial intelligence products." While the letter credits the innovative technology behind AI, it encourages the AI companies to "succeed without sacrificing the well-being of our kids in the process."
In the letter, the AGs explain that recently revealed documents from one company demonstrate awareness that children were exposed to "highly inappropriate sexualized content," including in chat conversations with AI, though they also recognize that these "risks [are not] isolated" to one company. The AGs further state that children are consumers, too, and that conduct is not excusable merely because it is "done by a machine" rather than by a human. They explain that this is not the first time they have needed to address "new and powerful technology" in Big Tech, alluding to their past Big Tech enforcement actions, and assert that children cannot be allowed to become victims again. The letter encourages the companies to use their "opportunities to exercise judgment" wisely, saying that what it comes down to is: "Don't hurt kids." It further promises that the AGs will hold the companies accountable, and "wish[es] [them] all success in the race for AI dominance."
This is not the first time AGs have been at the forefront of supporting or enforcing social media laws and policing AI companies, and it certainly won't be the last. In fact, just this week 47 AGs announced another NAAG letter, this one aimed at addressing "deepfake pornography." Stay tuned as we continue our coverage.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.