The Federal Trade Commission on October 23, 2017, provided guidance on how it will enforce the Children's Online Privacy Protection Act (COPPA)1 with respect to audio recordings of children. This comes as part of a wave of increased attention to children's privacy on websites, apps, and internet-connected toys. In this article, we summarize the new guidance and provide five practical tips that app developers and toymakers should follow to comply.

COPPA Requires Parental Consent to Collect Children's Personal Info

Under COPPA, voice recordings are considered personal information. The law imposes notice and verifiable parental consent requirements on operators of websites or online services that are directed to children under the age of 13, as well as on operators that have actual knowledge they are collecting or storing "personal information" from children under 13. In 1999, the FTC issued the COPPA Rule,2 which implements COPPA and defines "personal information" to include data such as names, addresses and Social Security numbers. In 2013, the FTC updated the COPPA Rule and expanded the definition of personal information to include photographs, videos and audio files.

The FTC Now Says It Will Not Require Parental Consent if Children's Voices Are Recorded Solely to Replace Written Words and Are Immediately Deleted After Transcription

After the 2013 amendment to the COPPA Rule, companies sought an exemption from COPPA's parental consent requirement when they collected audio not to personally identify a child, but to let the child issue a voice command, such as performing a speech-to-text search or fulfilling a spoken request.

The FTC finally agreed. In its guidance last month, the FTC acknowledged the value of voice commands for certain consumers, such as children who have not yet learned to write, or consumers with disabilities. Accordingly, the FTC announced a new enforcement policy under which it will not enforce the parental consent requirement in the limited circumstance where audio of a child's voice is collected solely for the purpose of replacing written words and is immediately deleted after transcription. The FTC provided two examples:

  • Converting speech to text in order to perform a search, or
  • Collecting audio to issue a command or request to the website, app or toy.

While these examples are still considered "collection" under COPPA, operators do not need to obtain parental consent as long as the audio file is destroyed promptly after the brief period necessary to carry out the purpose of replacing written words. The operator must, however, still describe these collection and deletion practices in its privacy policy. The FTC wants parents to know that these files are collected only for a limited use and are destroyed thereafter.
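
In practice, the pattern the FTC describes amounts to "collect, transcribe, delete." The sketch below is a minimal illustration of that pattern, not anything taken from the FTC guidance: the transcribe() helper is a hypothetical stand-in for whatever speech-to-text engine an app actually uses, and the point is simply that the recording exists only long enough to be converted to text, is never logged or shared, and is removed immediately after transcription.

    import os
    import tempfile

    def transcribe(path: str) -> str:
        """Hypothetical placeholder for the app's real speech-to-text call."""
        raise NotImplementedError

    def voice_command_to_text(raw_audio: bytes) -> str:
        """Convert a spoken request to text without retaining the audio."""
        # Hold the recording only in a short-lived temporary file for the
        # speech-to-text engine, then delete it no matter what happens.
        fd, path = tempfile.mkstemp(suffix=".wav")
        try:
            with os.fdopen(fd, "wb") as f:
                f.write(raw_audio)
            return transcribe(path)  # audio used solely to replace written words
        finally:
            if os.path.exists(path):
                os.remove(path)  # prompt deletion after transcription

A service built this way still "collects" audio under COPPA, so the practice must be described in the privacy policy, but it fits the narrow circumstance the FTC has said it will not pursue.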

Five Tips to Help Comply with the New Guidance

As with any regulation as technical as COPPA, collection practices will be analyzed on a case-by-case basis. That said, the following are steps that leading manufacturers of children's toys and mobile apps are taking to comply:

  1. Assessing whether company websites, apps or internet-connected toys collect audio files from children: Understanding whether audio files are being collected and how they are being used is the first step to determining the impact of the new COPPA guidance. Generally, online services that are directed to children under 13 need verifiable parental consent before collecting voice recordings. Under the new guidance, a company may not have to obtain verifiable parental consent before collecting audio files from children as long as the audio is used only to replace written words, is not used for any other purpose (such as behavioral profiling or identification), is not shared, and is deleted immediately after the transcription is complete.
  2. Including collection and deletion practices in company privacy policies, even if the company collects audio files solely to replace written words: The FTC wants to keep parents informed.
  3. Reviewing company security (in response to FBI warnings): While not part of the FTC guidance, leading companies are reviewing the security, access controls, encryption, monitoring and other safeguards around children's voice recordings (a minimal encryption sketch follows this list). In July and October, the FBI issued public service announcements warning that cyber criminals are increasingly targeting unsecured IoT devices, including toys. Specifically, the FBI cautioned consumers that while many of these toys are purchased for their ability to tailor their behavior based on user interactions, the data collected through microphones, cameras and sensors could put a child's privacy and security at risk if the company lacks adequate security measures.
  4. Assessing global/GDPR obligations: While audio files are not specifically addressed in other legislation, the forthcoming EU General Data Protection Regulation and similar laws around the world are increasing the number of countries where parental consent is required before collecting the personal data of children younger than 13 (and up to 16 in certain EU member states).
  5. Using machine learning cautiously and disclosing its use in company privacy policies: Parental consent is still needed if the toy or app asks for personal information by voice, such as asking users to say their name. It also remains unclear how the FTC will approach audio files that are run through machine learning processes or other databases in order to improve responses to a voice command or to tailor responses based on who is asking. Leading companies are starting to disclose their use of machine learning in their privacy policies, given that the FTC might consider machine learning a use beyond solely replacing written words, which would require parental consent in advance.
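
For tip 3, one concrete safeguard is encrypting any voice recordings that are retained at all (for example, with verifiable parental consent). The sketch below is a minimal illustration using the Fernet interface of the third-party Python "cryptography" package; the file paths and in-process key handling are assumptions for brevity, and a real deployment would keep keys in a dedicated secrets store behind access controls and monitoring.

    from cryptography.fernet import Fernet

    # Assumption: in production the key comes from a secrets manager and is
    # never stored alongside the data it protects.
    key = Fernet.generate_key()
    fernet = Fernet(key)

    def store_recording(raw_audio: bytes, path: str) -> None:
        """Encrypt a retained voice recording before it touches disk."""
        with open(path, "wb") as f:
            f.write(fernet.encrypt(raw_audio))

    def load_recording(path: str) -> bytes:
        """Decrypt a recording for an authorized, audited access."""
        with open(path, "rb") as f:
            return fernet.decrypt(f.read())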

Footnotes

1 15 U.S.C. §§ 6501-6505.

2 16 CFR Part 312.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.