State legislatures across the country are moving forward with legislation to regulate use of AI for many purposes, including election activity and political communications.

These new laws, which often have bipartisan support, build upon a trend to regulate deep fakes or synthetic media—artificial production, manipulation, and modification of data and media by automated means—including 2019 laws passed in Texas and California.

While these laws vary in scope and may raise constitutional concerns, campaigns should be aware of how this changing legal landscape can affect political activity and advertising. Similarly, digital, social, and traditional media platforms should be aware of their potential liability for publishing synthetically altered media in certain jurisdictions.

Leading the Way

In 2019, Texas passed a law making it a misdemeanor to distribute a deep fake video within 30 days of an election if done with the "intent to injure a candidate or influence the result of an election."

California also passed a 2019 law, extended in September 2022, to prohibit any person, committee, or entity from, within 60 days of an election, "distribut[ing], with actual malice, materially deceptive audio or visual media ... of the candidate with the intent to injure the candidate's reputation or to deceive a voter into voting for or against the candidate."

This prohibition doesn't apply to synthetically altered media that includes a disclosure stating, "This [image/video/audio] has been manipulated." Candidates whose images or voices are altered in violation of the law have a private right of action to seek relief.

Joining the Trend

In May, Minnesota passed a law prohibiting dissemination of a deep fake within 90 days of an election, with intent to injure a candidate or influence the result of an election, and without the consent of the depicted individual. There's no exception for communications with disclosures indicating the content is synthetically altered.

Affected candidates can sue for injunctive relief. The law allows affected candidates to seek relief from platforms that disseminate paid media. This law doesn't specify whether candidates are permitted to disseminate synthetically altered images or videos of themselves without any disclaimer to viewers.

Washington passed a law in July that prohibits publication of any images, audio recordings, or videos of a candidate's "appearance, speech, or conduct that has been intentionally manipulated with the use of generative adversarial network techniques or other digital technology" in a way that leads a reasonable individual to conclude the media is real.

Like the California and Minnesota laws, this prohibition doesn't apply to synthetically altered media that includes a disclosure stating, "This image/video/audio has been manipulated."

Affected candidates can sue for injunctive and equitable relief. Advertising mediums are exempt from suit under this law as long as the medium doesn't remove any disclaimers or otherwise alter the communication.

Similar Legislation

The New York legislature is considering legislation that would make it a Class E felony to distribute synthetic media "with intent to injure a candidate or unduly influence the outcome of an election" within 60 days of an election.

The law as drafted doesn't have a specific exclusion for synthetic media containing a disclaimer. However, the law wouldn't apply to synthetic media "that was created for the purpose of political or social commentary, parody, or artistic expression that is not disseminated or published with the intent to misrepresent its authenticity." This appears to create a significant exception under which campaigns could use synthetic media. The proposed law wouldn't apply to publication platforms or services.

A legislative package to regulate use of AI-generated material in political campaign advertisements is also working its way through the Michigan legislature.

Other states, such as New Jersey and Wisconsin, are considering comprehensive legislation that could also have implications for the use of AI in political activity.

In addition to numerous states exploring these types of options, Congress has been grappling with legislation to regulate use of AI in elections, most recently with the introduction of the Protect Elections from Deceptive AI Act. Also, the Federal Election Commission is considering whether to open rulemaking on use of AI in certain settings.

But as the 2024 election cycle revs up, state legislatures are clearly leading the way in enacting new laws regulating use of AI and synthetic media in political communications.

All stakeholders—candidates, campaigns, and media platforms—should remain attuned to these developments before creating and publishing synthetically altered content, particularly given the rapidly changing legal landscape.

Previously published by Bloomberg Law and Bloomberg Tax.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.