On December 20, 2019, President Trump signed the nation's first federal law related to "deepfakes." Deepfakes are false yet highly realistic media created with artificial intelligence, such as a video showing people saying things they never said and doing things they never did. The deepfake legislation is part of the National Defense Authorization Act for Fiscal Year 2020 (NDAA), the $738 billion defense policy bill, which the President signed into law after the Senate passed it 86-8 and the House 377-48.1

In two provisions related to this emerging technology, the NDAA (1) requires a comprehensive report on the foreign weaponization of deepfakes; (2) requires the government to notify Congress of foreign deepfake-disinformation activities targeting US elections; and (3) establishes a "Deepfakes Prize" competition to encourage the research, development or commercialization of deepfake-detection technologies. (The first two requirements appear in a single provision, Section 5709; the third appears in Section 5724.)

The first deepfake-related provision, Section 5709, imposes both a reporting requirement and a notification requirement on the Director of National Intelligence (DNI). The law directs that within six months of its enactment, the DNI must submit to the Congressional Intelligence Committees an unclassified report on the potential national security impacts of deepfakes (what it calls "machine-manipulated media" and "machine-generated text") and the actual or potential use of them by foreign governments "to spread disinformation or engage in other malign activities." The DNI must submit any significant updates to this report to Congress annually.

The report, which the DNI is to write in consultation with the heads of the elements of the US Intelligence Community (IC) that he or she determines appropriate, is to include:

  • An assessment of the technical capabilities of foreign governments regarding deepfakes. The law specifically directs assessments of the technical capabilities of China and Russia. It also directs the DNI to prepare an annex, which may be classified, describing Chinese and Russian governmental elements and the private sector, academic or nongovernmental entities that support or facilitate deepfake research, development or dissemination.
  • An updated assessment of how foreign governments, foreign government-affiliated entities or foreign individuals could use or are using deepfakes to harm US national security interests with respect to "the overseas or domestic dissemination of misinformation," "the attempted discrediting of political opponents or disfavored populations," and "intelligence or influence operations" targeting the United States, allies or jurisdictions that are believed to be subject to Chinese or Russian interference. Ukraine, for instance, would likely fall into this latter category.
  • An assessment of counter-deepfake technologies that have been or could be developed by the US government "or by the private sector with Government support, to deter, detect, and attribute the use of" deepfakes by foreign adversaries (emphasis added). This assessment must include "any emerging concerns related to privacy."
  • A description by the DNI of the IC offices that have or should have lead responsibility for monitoring the development and use of deepfakes. The DNI must describe in detail the IC's current capabilities and research geared to detecting deepfakes, including the speed and accuracy of such detection.
  • A description of any research and development activities being considered or carried out across the IC.
  • A list of updated recommendations regarding whether the IC needs additional legal authorities, resources or personnel to address the deepfake threat. 

Second, Section 5709 also requires the DNI to notify the Congressional Intelligence Committees "each time" the DNI determines there is credible intelligence that a foreign entity has deployed or is deploying deepfakes "aimed at the elections or domestic political processes of the United States." The DNI must also notify Congress if the disinformation campaign can be attributed to a foreign government, entity or individual.

Third, Section 5724 of the NDAA establishes a deepfakes competition run by the DNI "to award prizes competitively to stimulate the research, development, or commercialization of technologies to automatically detect machine-manipulated media" (emphasis added). The NDAA authorizes the DNI to award up to $5 million total to one or more winners.

While the NDAA was the first bill to become law that contains sections related to deepfakes, two further bills have each passed one Congressional chamber and remain pending in the other. (Several other bills on this topic remain under consideration in various committees.2)

The Identifying Outputs of Generative Adversarial Networks (IOGAN) Act (H.R. 4355) was adopted by the House by voice vote on December 9, 2019, and remains pending in the Senate.3 The IOGAN Act would direct the Director of the National Science Foundation (NSF) to support "merit-reviewed and competitively awarded research on manipulated or synthesized content and information authenticity." Such research may include fundamental research on technical tools for verifying the authenticity of information and identifying manipulated media, social and behavioral research on the ethics of the technology, and research on public understanding and awareness of deepfakes and best practices for public education.
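
To make concrete the kind of technical tools for identifying manipulated media that the IOGAN Act contemplates, one strand of academic detection research looks for statistical artifacts that generative models can leave in an image's frequency spectrum. The sketch below is purely illustrative and is not drawn from the bill; it assumes Python with NumPy, and the function name radial_power_spectrum is our own invention.

```python
import numpy as np

def radial_power_spectrum(image: np.ndarray, bins: int = 64) -> np.ndarray:
    """Azimuthally averaged power spectrum of a 2-D grayscale image.

    Illustrative only: some published detection research observes that GAN
    upsampling can leave telltale bumps in the high-frequency tail of this
    curve, which a downstream classifier can learn to flag.
    """
    # 2-D FFT, shifted so low frequencies sit at the center of the array
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2

    # Radial distance of every pixel from the spectrum's center
    h, w = spectrum.shape
    y, x = np.indices(spectrum.shape)
    r = np.hypot(y - h // 2, x - w // 2)

    # Average the power within concentric rings of equal radial width
    edges = np.linspace(0.0, r.max() + 1e-9, bins + 1)
    profile = np.array([
        spectrum[(r >= lo) & (r < hi)].mean()
        for lo, hi in zip(edges[:-1], edges[1:])
    ])
    return profile / profile[0]  # normalize to the lowest-frequency ring

if __name__ == "__main__":
    # Stand-in for a real video frame; real use would load suspect media
    # and compare its profile against profiles of known-authentic footage.
    frame = np.random.default_rng(0).random((256, 256))
    print(radial_power_spectrum(frame)[-8:])  # inspect the high-frequency tail
```

A detector built this way is only a heuristic; the bill's research agenda, like the NDAA's prize competition, is aimed at developing and validating far more robust approaches.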

The bill would also direct the Director of the National Institute of Standards and Technology (NIST) to support research to develop measurements and standards that could be used to examine deepfakes. The NIST Director would also be required to conduct outreach to stakeholders in the private, public and academic sectors on fundamental measurements and standards research related to deepfakes, and to consider the feasibility of ongoing public- and private-sector engagement to develop voluntary standards for deepfakes.

The NSF and NIST directors would be required to submit a report to Congress no later than one year after the bill's enactment on their findings regarding the feasibility of research opportunities with the private sector, along with any policy recommendations that could facilitate and improve communication and coordination among the private sector, the NSF and relevant federal agencies in implementing approaches to detect deepfakes.

The Deepfake Report Act of 2019 (S. 2065) passed the Senate on October 24, 2019, by unanimous consent and remains pending in the House.4 The bill would direct the Department of Homeland Security to issue a report within one year of enactment, and every year for five years thereafter, on deepfake technology, which it refers to as "digital content forgery technology." Among other things, the report would describe the kinds of deepfakes used to commit fraud, cause harm and violate federal civil rights; assess the harm deepfakes cause to individuals; and assess methods to detect and counter such forgeries.

Other Deepfakes Laws

The NDAA caps a busy year for legislating in this emerging field. In 2019, two states enacted laws criminalizing certain deepfakes. Virginia became the first state in the nation to impose criminal penalties on the distribution of nonconsensual deepfake pornography. The law, which went into effect on July 1, 2019, made the distribution of nonconsensual "falsely created" explicit images and videos a Class 1 misdemeanor, punishable by up to a year in jail and a fine of $2,500.5

On September 1, 2019, Texas became the first state in the nation to prohibit the creation and distribution of deepfake videos intended to harm candidates for public office or influence elections. The Texas law defines a "deep fake video" as a video "created with the intent to deceive, that appears to depict a real person performing an action that did not occur in reality." It makes it a Class A misdemeanor, punishable by up to a year in the county jail and a fine of $4,000, for a person to "create[]" a deepfake video and "cause[]" that video "to be published or distributed within 30 days of an election," if the person does so with the "intent to injure a candidate or influence the result of an election."6

California enacted two laws in October 2019 that, collectively, allow victims of nonconsensual deepfake pornography to sue for damages and give candidates for public office the ability to sue individuals or organizations that distribute "with actual malice" election-related deepfakes without warning labels near Election Day.7

As the experience of the past year shows, the legislation in this area is changing rapidly as policymakers wrestle with new and emerging deepfake-related threats to national security, individuals and businesses.

Footnotes

1 S. 1790, 116th Cong. (2019). The deepfake-related provisions were originally part of a standalone bill, "The Damon Paul Nelson and Matthew Young Pollard Intelligence Authorization Act for Fiscal Years 2018, 2019, and 2020," which was incorporated into the NDAA.

2 See Matthew F. Ferraro, Deepfake Legislation: A Nationwide Survey, WilmerHale Client Alert, September 25, 2019.

3 H.R. 4355, 116th Cong. (2019).

4 S. 2065, 116th Cong. (2019).

5 Ferraro, Deepfake Legislation, at 15-16 (discussing Va. Code Ann. § 18.2-386.2).

6 Id. at 14-15 (discussing Tex. SB 751).

7 Id. at 10-12 (discussing Calif. AB-602 and AB-730).

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.