What do videos of Donald Trump hugging Anthony Fauci and Hillary Clinton calling Ron DeSantis "just the kind of guy this country needs" have in common?

No, it's not further evidence that politics makes strange bedfellows.

Rather, it's proof that generative artificial intelligence has reached a point where disinformation, propaganda, and fake news can sway public opinion and reinforce inaccurate assumptions. Deepfake images, newscasts, and videos are cheap to produce and difficult to detect, offering tempting tools for politicians with less than a year until another contentious presidential election.

At the same time, AI is increasingly used to monitor, censor, and suppress speech and expression online. A Freedom House report notes that in the last year, people in more than 50 countries were prosecuted for expressing their opinions online.

"This is a critical issue for our time, as human rights online are a key target of today's autocrats," said Freedom House President Michael J. Abramowitz. "Democratic states should bolster their regulation of AI to deliver more transparency, provide effective oversight mechanisms, and prioritize the protection of human rights."

So, with that election looming, where should countries draw the line between free speech protections and the potentially harmful dissemination of fake videos?

Background – Rise of the (AI) Machines

Deepfakes' capacity for generating hyper-realistic yet fictitious audiovisual content has outpaced existing laws, raising concerns about violations of privacy and reputation, along with questions about consent and authorization, when the technology is used in political advertising. Proposed remedies include requiring political advertisers to disclose clearly and conspicuously when generative AI tools are used to create content. Publishers and agencies associated with political campaigns should familiarize themselves with existing rules and closely monitor rapidly changing developments, particularly as the 2024 election cycle approaches.

Despite these concerns, it's important to recognize the benefits of AI technology. It has been used to create realistic digital characters, enhance special effects, and even craft entire storylines for blockbuster movies. Those same advancements, however, have blurred the line between reality and fiction, further complicating the discussion around the ethical use of AI. As we move toward an increasingly digitized future, regulations must evolve in tandem with the technology to ensure a fair and ethical digital space.

Attempts at Regulation

As Shannon Reid mentions in "The Deepfake Dilemma: Reconciling Privacy and First Amendment Protections," creators of deepfakes often have a First Amendment defense in civil claims against them. A deepfake may be considered a form of protected speech, especially if it is deemed "transformative," such as for purposes of parody, satire, or commentary.

Google announced in September that any political advertisement featuring "synthetic content" – defined as AI-generated photos or videos that inauthentically represent real people or events – must include a clear disclosure indicating the presence of AI-generated content. The requirement does not extend to ads that use AI for inconsequential tasks such as image resizing or color correction.

In the US legislative landscape, lawmakers are also working towards regulating AI-generated political ads. Senator Amy Klobuchar (D-MN) and Representative Yvette Clarke (D-NY) introduced the REAL Political Ads Act, which mandates a disclaimer on any political ads that use AI-generated images or video. In a letter to Meta Platforms CEO Mark Zuckerberg and X Corp. (Twitter) head Linda Yaccarino, they note that "We are already seeing examples of deceptive AI-generated content in political ads that has the potential to deceive voters and disrupt their trust and faith in our elections. A lack of transparency about this type of content in political ads could lead to a dangerous deluge of election-related misinformation and disinformation across your platforms – where voters often turn to learn about candidates and issues."

However, given the current partisan gridlock, the future of these bills remains uncertain.

Simultaneously, the Federal Election Commission (FEC) is exploring regulatory measures for AI-generated political ads. It has opened a public comment period on a petition to amend its existing regulation prohibiting candidates or their agents from fraudulently misrepresenting other candidates or political parties, so that the prohibition would clearly extend to deceptive AI-generated campaign ads. Online platforms, political advertisers, and agencies should also be aware of other laws governing online political advertising. The FEC now requires "internet public communications" to disclose who paid for such ads, closing a loophole that previously exempted these ads from FEC disclosure requirements.

Moreover, on the state level, new legislation has been enacted to regulate online political ads. These laws range widely, with some extending existing TV and radio ad requirements to online political advertising, while others impose new recordkeeping requirements on online platforms and ad networks. Given the diverse range of state laws, political advertisers and online platforms must diligently review each advertisement to ensure compliance with all legal requirements.

First Amendment Issues

Few would dispute that deepfakes are forms of expression, and many forms of expression enjoy free speech protections. While the use of AI to manipulate images and videos is a concern, AI is hardly the first tool used to make political opponents look bad. Politicians and their handlers have long resorted to Photoshop, "creative" video edits, and out-of-context quotes. Focusing solely on the technology may overlook the real issue: deception, say several industry insiders.

Some free speech advocates note that political campaigns already use deceptive techniques to promote their candidates or weaken their opponents. Even before the advent of AI tools, campaigns deceptively edited images, audio, and video, said Ari Cohn, free speech counsel at TechFreedom, a nonprofit that focuses on internet freedom and technology.

"If you think [deceptive ads are] a problem then it would make sense to address it, whether it's created by AI or not," Ari Cohn, a lawyer for TechFreedom told Roll Call. "I'm not sure it makes sense to address it only when an ad is generated by AI."

Crafting legislation that narrowly targets deceptive uses of AI without encroaching on standard campaign practices would prove difficult, requiring judgment calls about when alterations to an image or video cross the line separating deepfakes from acceptable post-production techniques.

This sentiment is echoed by Senator Bill Hagerty (R-TN), who expressed concerns about stifling both speech rights and the potential positive applications of AI through heavy-handed regulation. Hagerty argues against rushing into regulatory action without fully understanding the implications of emerging technologies, particularly when it could limit political speech.

Enforcement Options

Legal frameworks are being considered to regulate the use of deepfakes in political advertising while respecting free speech rights.

Technological solutions may provide a way forward. As AI continues to evolve, so do the tools to detect and authenticate content. The integration of blockchain technology could play a pivotal role in distinguishing genuine content from manipulated material. However, as AI-generated content becomes more commonplace, there is a risk of the "liar's dividend" – a phenomenon in which the mere prevalence of fabricated content makes people more skeptical of everything they see, even true information. This could allow political actors to cast doubt on reliable information and to claim that damaging statements they actually made are deepfakes.
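To make the authentication idea concrete, here is a minimal sketch in Python of hash-based provenance checking. The REGISTRY mapping, content ID, and placeholder fingerprint are hypothetical stand-ins for whatever tamper-evident ledger a real scheme (blockchain-based or otherwise) would use; this illustrates the general technique, not any deployed system.

```python
import hashlib

# Hypothetical registry mapping a content ID to the SHA-256 fingerprint
# recorded when the media was first published. In a real provenance
# scheme this record might live on a blockchain or in a signed manifest.
# The placeholder value below is simply the SHA-256 hash of an empty file.
REGISTRY = {
    "ad-2024-001": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def fingerprint(path: str, chunk_size: int = 65536) -> str:
    """Compute the SHA-256 fingerprint of a media file, reading in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_authentic(path: str, content_id: str) -> bool:
    """Return True only if the file matches its registered fingerprint.

    Any mismatch means the file differs from the version originally
    registered, flagging possible manipulation after publication.
    """
    recorded = REGISTRY.get(content_id)
    return recorded is not None and recorded == fingerprint(path)
```

The design point is that the ledger only has to be trusted at publication time: anyone can later recompute the hash locally and compare it against the registered value, so even a one-pixel edit to a video would produce a mismatch.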

Striking this balance demands a thoughtful and measured approach, as well as a comprehensive understanding of the potential remedies available under existing law. These may include making it easier for victims to seek recourse through defamation and privacy laws, data protection regulations, and intellectual property rights enforcement. Their applicability will depend on the specific circumstances of each case.
