In an era where the veracity of visual and audio evidence has historically been taken for granted, the rise of deepfakes presents a critical challenge. Here, we explore the impact of seemingly undetectable deepfakes on society, investigations, and disputes, examining how the prevailing trust in what people see or hear can be – and increasingly is – exploited.

Deepfakes: The emergence of synthetic media

Deepfakes are a type of synthetic media that uses Artificial Intelligence (AI) to create realistic images, videos, or audio recordings of events that never occurred. The term "deepfake" is the now-infamous portmanteau of "deep learning" and "fake".

Continued advancements in AI technology have ushered in a new era of deepfakes, in which convincingly manipulated media has become increasingly easy to create. With only basic technical knowledge and a working familiarity with the appropriate software, individuals can now generate deepfakes with relative ease. This software is often free or available at minimal cost, and popular video-sharing platforms host many tutorials showing how to get started.

While amateur attempts at generating deepfakes may still exhibit noticeable flaws, the rapid progress in AI technology raises the alarming possibility that these fabrications could soon become indistinguishable from the genuine article. Combining deepfake technology with a convincing narrative can dupe people into believing that what they are watching is real, as a relatively recent deepfake video posted to Facebook demonstrated.

The technology behind deepfakes: How machine learning algorithms create convincing counterfeits

Deepfakes rely on machine learning algorithms, specifically deep neural networks, to analyse and manipulate existing images, videos, or audio recordings. These algorithms learn to mimic the appearance and behaviour of a person or object and then apply those learned patterns to create a highly realistic counterfeit.

Creating persuasive deepfakes requires a significant volume of training data, typically numerous images or videos featuring the targeted individual. Using this data as a reference, deep learning algorithms learn to capture the distinctive attributes and facial expressions of the person being impersonated. Continuous refinement and incorporation of feedback by the creators play a vital role in improving the quality and authenticity of the deepfake. Although deepfakes are commonly associated with image and video manipulation, parallel techniques can be applied to audio, enabling the production of manipulated voice recordings or synthesised speech.

Realism in deepfakes is achieved through Generative Adversarial Networks (GANs), in which two neural networks are trained against each other: a generator produces new images based on source data, while a discriminator evaluates them against genuine examples and rejects those that fail to pass as real. Each rejection feeds back into the generator, continually improving the quality of its output. As the generation and discrimination cycles persist, the deepfake gradually approaches the point where it becomes virtually indistinguishable from authentic media. The adversarial loop is sketched in simplified form below.
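
For readers curious about the mechanics, the following is a minimal, illustrative sketch of that adversarial loop in PyTorch. It trains on toy numeric data rather than real face images, and the network sizes, learning rates, and synthetic dataset are assumptions chosen for brevity, not the architecture of any actual deepfake tool.

```python
# A toy Generative Adversarial Network: the generator learns to produce
# 64-dimensional samples that the discriminator cannot tell apart from the
# "real" reference data. All sizes and hyperparameters are illustrative.
import torch
import torch.nn as nn

LATENT, DATA = 16, 64

generator = nn.Sequential(
    nn.Linear(LATENT, 128), nn.ReLU(),
    nn.Linear(128, DATA), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(DATA, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

bce = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_data = torch.tanh(torch.randn(1024, DATA))  # stand-in for real images

for step in range(2000):
    real_batch = real_data[torch.randint(0, len(real_data), (32,))]
    fake_batch = generator(torch.randn(32, LATENT))

    # Discrimination: learn to score real samples 1 and generated samples 0.
    opt_d.zero_grad()
    d_loss = (bce(discriminator(real_batch), torch.ones(32, 1)) +
              bce(discriminator(fake_batch.detach()), torch.zeros(32, 1)))
    d_loss.backward()
    opt_d.step()

    # Generation: adjust the generator so its output is scored as real.
    opt_g.zero_grad()
    g_loss = bce(discriminator(fake_batch), torch.ones(32, 1))
    g_loss.backward()
    opt_g.step()
```

Even at this toy scale the key dynamic is visible: the discriminator is rewarded for telling real from fake, the generator for fooling it, and each forces the other to improve.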

The technology enabling their creation is advancing rapidly, with increasingly capable open-source AI tools allowing anyone to generate deepfakes for free, as reported by the New York Times.

Spotting the unseen: Identifying deepfakes in the age of AI

With significant advancements in this area, even fakes of modest quality can pass a cursory examination. However, there are a few basic methods that may be employed to identify deepfakes and, to some extent, prevent them from being shared further. For instance, one can evaluate the video's lighting and shadows, search for inconsistent motion or facial expressions, and listen for oddities in the audio. Examining the face, and in particular how the person blinks, is a good place to start, since high-end deepfake manipulations almost always involve facial alterations. Any mismatch or lack of synchronisation between the movements of the lips and the words being spoken can also serve as a clue. A simple blink-analysis heuristic is sketched below.
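
As an illustration of how one such manual check might be automated, the following Python sketch estimates blink rate from the "eye aspect ratio" (EAR), a standard measure in blink-detection research. It assumes eye landmarks have already been extracted by an off-the-shelf face-landmark detector (such as dlib or MediaPipe); the threshold and frame counts are illustrative assumptions, and an unusual blink rate is a weak signal, not proof of manipulation.

```python
# Estimating blink rate from the eye aspect ratio, which drops sharply when
# the eyelid closes. Assumes six (x, y) eye landmarks per frame, ordered
# around the eye contour, supplied by an external face-landmark detector.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: 6x2 array of landmarks ordered around the eye contour."""
    v1 = np.linalg.norm(eye[1] - eye[5])  # vertical eyelid distances
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])   # horizontal eye width
    return (v1 + v2) / (2.0 * h)

def blinks_per_minute(ear_per_frame: list[float], fps: float,
                      threshold: float = 0.21, min_frames: int = 2) -> float:
    """Count a blink as a run of consecutive below-threshold frames."""
    blinks, run = 0, 0
    for ear in ear_per_frame:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:  # blink still in progress at end of clip
        blinks += 1
    minutes = len(ear_per_frame) / fps / 60.0
    return blinks / minutes if minutes else 0.0
```

Adults typically blink roughly 15 to 20 times per minute, so a talking-head video whose subject blinks far less (or far more) merits closer scrutiny, although modern deepfakes increasingly reproduce natural blinking.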

Aside from relying on our senses to detect deepfakes, improvements are also being made in deepfake detection algorithms, such as the one developed by Stanford University, which essentially uses AI to detect deepfakes created by AI (an approach illustrated in simplified form below). However, some digital-forensics experts estimate that people working on video synthesis outnumber those working on detection 100 to 1. A recent evaluation of Intel's "FakeCatcher" by BBC News showed mixed results, with the tool even identifying some authentic videos as fake.
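
To make the "AI to detect AI" idea concrete, here is a deliberately simplified sketch of a learned detector: a small convolutional network trained on labelled real and fake frames. This is an assumption-laden illustration of the general approach, not a description of Stanford's or Intel's actual methods, which rely on far larger models and subtler signals (FakeCatcher, for instance, reportedly analyses blood-flow patterns in facial pixels).

```python
# A tiny convolutional classifier trained to label video frames as real or
# fake. The architecture and the random stand-in data are assumptions for
# illustration; real detectors train on large corpora of known-fake footage.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1),  # logit: positive suggests fake, negative suggests real
)

frames = torch.randn(8, 3, 64, 64)            # stand-in RGB frames
labels = torch.randint(0, 2, (8, 1)).float()  # 1 = fake, 0 = real

opt = torch.optim.Adam(detector.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(10):
    opt.zero_grad()
    loss = loss_fn(detector(frames), labels)
    loss.backward()
    opt.step()
```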

The dual nature of deepfakes: Societal risks and opportunities

Some experts estimate that as much as 90% of online audio-visual content could be synthetically generated within the next few years. This "unchecked" rise of deepfakes could have wider societal impacts, such as eroding trust in media and institutions. People may no longer have a "shared reality" and may revert to trusting only what they have seen themselves (or what people they know and trust have seen). The World Economic Forum's recently published report also sheds light on specific conduct risks presented by deepfakes, ranging from financial fraud using impersonation, to social media catfishing that exploits people for money or gifts, to electoral manipulation that sways public opinion with doctored videos of political figures. This was exemplified in a recent report on a state-aligned campaign in which deepfakes were used to spread deceptive political content.

On the flip side, there are also potential pro-social applications of deepfakes, such as creating more realistic virtual experiences or improving accessibility for people with disabilities. For example, a British start-up has been developing a retail app that lets users upload videos of their faces and, within minutes, generate deepfake outputs in which the user replaces the model. In medicine, Project Revoice was launched to help people with Amyotrophic Lateral Sclerosis (ALS) regain the voices they lost to the disease. A few years ago, the state of the art was hearing the renowned physicist Stephen Hawking "speak" in his robotic, computer-generated voice; today, initiatives such as Revoice can help restore a patient's natural voice.

Overall, the impact of deepfakes on society is complex and multifaceted, but its dual nature cannot be denied: deepfakes will usher in new opportunities as well as new dangers. It is therefore critical to ensure people know when a photo or video has been generated by AI, as emphasised by Microsoft's president, Brad Smith, in a recent speech in Washington.

Deepfakes in the Courts: Impact in the world of investigations and disputes

Deepfakes have the potential to upend the world of investigations and disputes, whose digital fact base is now itself vulnerable to fakery. When the line between the real and the imitation becomes blurred, the consequences of false accusations, incorrect judgments, and an erosion of trust in our judicial system are all too real.

In criminal cases, there is a rebuttable presumption that computers operate correctly in producing electronic evidence. However, multiple cases and investigations stand as evidence against this presumption, even where no party acted with malicious intent.

Audio-visual material is not viewed in the same light as other computer outputs such as reports. We can all understand the risk of nefarious doctoring of an accounting report in a corporate fraud investigation, but matters become much trickier when our own eyes and ears are being deceived.

As individuals increasingly lack the ability to spot deepfakes, detection algorithms will be required to consistently and accurately flag faked evidence. In the same way that data experts are required to provide expert testimony on the integrity of data, digital forensic experts will be required to validate the authenticity of audio-visual material using these detection algorithms. Aside from the increased time and expense in legal proceedings, the most troublesome aspect could prove to be the cat-and-mouse game between good and bad actors as each leverages new technological developments.

While we continue to see significant breakthroughs in the development of deepfake technology, there is a real risk of detection algorithms lagging behind for extended periods. Such exposure calls into question how much reliance, if any, can be placed on audio-visual evidence. Defendants can raise sufficient doubt by mounting a "deepfake defence", claiming that any audio-visual material is not authentic. For the claimant, the burden of proof then becomes one of proving a negative, i.e., that the material is not fake, which may be practically impossible.

It is likely that many investigations and disputes concluded on the basis of audio-visual evidence will quickly be appealed once detection technology makes its next significant leap. How many times this will happen before an equilibrium is reached is unclear; what is clear is the damage to judicial proceedings if the cycle is allowed to persist.

Looking ahead: The rise of a zero-trust society?

Use of the term "fake news", which the former US President wrongly claimed to have coined, sought to undermine trust in modern media by casting doubt on the honesty of reporters and journalists. Deepfakes have the potential to amplify this effect and create a zero-trust society in which people cannot, or are unwilling to, distinguish between the real and the fake.

Aside from the evident issue of faked realities, there is the issue of genuine events becoming plausibly deniable. We are already at the stage where individuals are easily duped by deepfakes and detection algorithms are struggling to keep pace with the developing technology. If the distinction proves unreliable, or the time and cost of drawing it are impractical, the default defence will be to cry "deepfake". Shocking footage showing extrajudicial executions by Cameroonian military personnel was initially dismissed by the country's Ministry of Communication as fake; an investigation by Amnesty International experts subsequently gathered extensive and credible evidence suggesting the footage was genuine. There are numerous other such examples, and they are likely to continue for as long as denial remains plausible.

This obfuscation of reality in both directions will have a profound and far-reaching impact on investigations and disputes, affecting politics, journalism, and legal proceedings.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.