Most people think of Google as the full extent of the Internet. This is a common misconception: Google indexes only about 20% of the Internet, with the deep web and the dark web making up the remainder. The dark web is a section of the Internet where users can access websites anonymously, and this anonymity has made it associated with illegal and criminal activities. On the dark web, for example, access to compromised web cameras can be purchased, allowing unsuspecting victims to be watched through their own devices. This has been an ongoing concern for many, who have started taking measures to protect their security and privacy, such as covering their device cameras out of fear of being monitored.

The intersection of Artificial Intelligence (“AI”) and surveillance is inevitable as society's digital footprint continues to grow and surveillance technology improves. A machine learning model was recently released that was trained on security camera footage to predict a person's bank PIN from how the person covers the keypad when entering the PIN at an ATM. This shows how AI algorithms can be trained on video footage to achieve specific goals. In this article, we discuss how artificial intelligence can be applied to web camera footage for malicious purposes, such as the creation of deepfakes.

What are deepfakes?

Deepfakes are synthetic videos, images, or audio recordings generated by artificial intelligence algorithms that replicate a person's likeness, such as their voice and facial features. To launch a convincing deepfake attack, a hacker needs access to a sizable amount of data on the victim, including the pictures and videos used to train the deepfake model. For those in the public eye, such as celebrities and athletes, this data is readily available, which is why we have seen an increase in convincing deepfake replicas of such figures. Many will recall the 2019 deepfake of Mark Zuckerberg in which he appeared to state: “Imagine this for a second: One man, with total control of billions of people's stolen data, all their secrets, their lives, their futures.” Although deepfakes are in their nascent stages, with the majority being used for Internet trolling or in jest, this does not mean that they are harmless.

Deepfakes introduce a number of serious risks, including, inter alia:

  • Reputational damage: deepfakes can be used to create fake videos or images that depict a person engaging in inappropriate, illegal, or embarrassing activities, damaging their reputation;
  • Identity theft: deepfakes can be used to impersonate an individual, potentially leading to identity theft, fraud, or other criminal activities;
  • Privacy invasion: deepfakes can invade a person's privacy by superimposing their likeness onto explicit or intimate content without their consent. Again, this could also cause reputational harm;
  • Misinformation: deepfakes can spread false or misleading information;
  • Blackmail: hackers can use deepfakes to extort money or other concessions from individuals by threatening to release fake content;
  • Trust erosion and zero-trust society: deepfakes can erode trust, as people become increasingly sceptical about the authenticity of online media; and
  • Social engineering: hackers may use deepfakes to deceive others into taking actions they otherwise wouldn't, potentially causing harm to the individual or others. For example, a deepfake of a person's voice could be used to authorise a bank transfer.

As surveillance and people's digital footprints increase in tandem and AI becomes more advanced, it is arguably only a matter of time before they converge. This may result in more advanced deepfakes that are harder to detect and identify, leading to higher levels of fraud and identity theft and, ultimately, an erosion of trust that produces a zero-trust society.

The bottom line is that the smaller your digital footprint, the more difficult it is to create a deepfake of you. This is one of the factors currently impeding the large-scale misuse of deepfake technology. However, this will change if hackers link AI algorithms to live webcam footage, which could ultimately produce more robust deepfakes that can be used to commit identity theft, fraud, or defamation.

For protection against these threats, we recommend the following:

  • Review accounts and delete any inactive accounts;
  • Regularly update passwords and do not use variations of the same password for different accounts (see the illustrative sketch after this list);
  • Use a privacy sticker or cover to block webcams when not in use;
  • Install and regularly update antivirus software and ad blockers;
  • Avoid clicking on links unless the source can be verified;
  • Avoid accessing ‘untrusted’ websites; and
  • Avoid downloading attachments from emails originating from unknown senders.
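
The password recommendation above is best met with unique, randomly generated passwords rather than memorable variations of a single password. As a minimal illustration only, and not a substitute for a reputable password manager, the following Python sketch uses the standard library's secrets module to generate a strong, distinct password for each account; the account names and password length are hypothetical.

    import secrets
    import string

    # Character pool: letters, digits and a selection of punctuation.
    ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*-_"

    def generate_password(length: int = 16) -> str:
        """Return a cryptographically strong random password."""
        return "".join(secrets.choice(ALPHABET) for _ in range(length))

    # Hypothetical example: a distinct password for each account, rather than
    # variations of the same base password.
    if __name__ == "__main__":
        for account in ("email", "banking", "social media"):
            print(f"{account}: {generate_password()}")

In practice, a password manager achieves the same result while also storing the generated passwords securely.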

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.