With the rise of generative artificial intelligence (AI) and its synthetic media outputs, deepfakes are just one of many new risks to businesses. They pose considerable threats to companies, potentially damaging reputation, trust, and financial stability through malicious impersonation and manipulation of digital content.
Deepfakes are artificially constructed images, audio recordings, or videos. AI creates deepfakes through a process referred to as "deep learning": the model is trained on examples of images, audio, or video and learns to generate a convincing imitation of the sample material. Often, deepfakes are used to replace the likeness of one person with another in digital media.
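For the technically curious, the toy sketch below illustrates the core idea in Python (using the PyTorch library): a small autoencoder is trained to reproduce sample images, the same reconstruction principle that, at far larger scale, lets deepfake tools render one person's likeness over another's. The model structure and image dimensions here are illustrative assumptions, not any real tool's pipeline.

```python
import torch
import torch.nn as nn

# Toy autoencoder: learns to reconstruct its training images. Real deepfake
# pipelines pair a shared encoder with per-person decoders at much larger scale.
class AutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 256), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(256, 64 * 64), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x)).view(-1, 1, 64, 64)

model = AutoEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

faces = torch.rand(8, 1, 64, 64)  # stand-in for a folder of 64x64 face crops

for step in range(200):
    reconstruction = model(faces)          # attempt to imitate the samples
    loss = loss_fn(reconstruction, faces)  # how far off is the imitation?
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```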
Deepfake technology has already intersected with tort and criminal law, raising points of concern. Using deepfake media could trigger tort claims for defamation, false light, and intentional infliction of emotional distress, or a fraud action in the criminal law space. Additionally, trademark and copyright infringement claims are likely to increase as AI systems continue to improperly use trademark-protected images and logos, or copyrighted music and sound clips, as material for deepfakes.
Consider the following practical examples of how a deepfake could impact a business.
- Imagine a scenario where a deepfake is used to impersonate a customer service representative of a credit union. A bad actor could create a convincing video or audio clip of a customer service agent providing incorrect information about account balances, interest rates, or loan terms to member clients. This misinformation could lead to confusion and frustration among members, potentially causing them to make improper financial decisions based on false information.
- A deepfake could be used to create a video in which the CEO of a corporation announces a financial crisis or fraudulent activity within the organization, whether spread internally to employees or externally to customers. This misinformation could cause public panic, leading to reputational damage and potential regulatory scrutiny. It could also undermine stakeholders' trust in the corporation's leadership and stability.
- Imagine you receive an email that appears to be from your usual point of contact with a vendor, complete with their familiar email signature. The message is friendly and conversational, stating that there's a small issue with an overdue invoice that needs your attention. They emphasize how much they appreciate your partnership and mention that settling this promptly will help ensure uninterrupted service. The email encourages you to call a specific phone number to discuss the matter further. Trusting the familiar name, you dial the number, only to find yourself speaking to a deepfake that sounds just like your contact. The conversation feels genuine, and you're convinced to provide payment details, unknowingly authorizing a payment that goes straight to the scammer instead of the legitimate vendor.
These examples are not meant to scare companies away from using AI. Instead, with protective measures in place, AI can be used to detect and prevent deepfakes from disrupting a business' operations and stability. Proactive measures an institution may take to mitigate the risks of deepfakes include the following:
- Raise Awareness and Training: Educate employees about the existence of deepfakes, their potential impact, and how to recognize them in the context of the business' industry. Provide training on verifying the authenticity and source of digital content; encourage skepticism when encountering unfamiliar or suspicious communications; and consider implementing an AI policy to guide employees' use of the technology in the workplace.
- Monitor Online Presence: Assign a member or team within the organization to regularly monitor online platforms and the business' social media presence for any unauthorized or manipulated content that could harm the company's reputation or deceive stakeholders. AI tools and services can themselves assist in detecting deepfakes; the same technology used to create them can help expose them (see the first sketch after this list).
- Implement Verification Processes: Establish rigorous verification procedures for high-risk transactions, such as financial transfers or changes to corporate policies. When such processes are handled through digital channels, use multi-factor authentication and other security protocols to confirm the identity of the individuals involved (see the second sketch after this list).
- Enhance Cybersecurity Measures: Strengthen cybersecurity defenses to protect against unauthorized access to, and manipulation of, digital assets, including employee and customer data. Regularly update software, employ encryption technologies, and conduct vulnerability assessments to identify and mitigate potential weaknesses in the institution's technology. A formal cybersecurity policy is another measure that promotes corporate security.
- Collaborate with Experts: Work with cybersecurity experts, legal advisors, and technology partners who specialize in deepfake detection and mitigation. Stay informed about emerging technologies; local, industry-specific, state, and federal AI guidelines; and best practices for safeguarding against digital impersonation and manipulation.
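As one illustration of the monitoring point above, the Python sketch below uses the open-source Pillow and imagehash libraries to compare a suspect copy of a company asset found online against the official original. This is a simple tamper heuristic, not a full deepfake detector; the file names and the threshold of 10 are assumptions for illustration.

```python
from PIL import Image
import imagehash

# Compare a suspect image found online against the company's official original.
# A large perceptual-hash (Hamming) distance suggests the found copy has been
# altered and should be escalated for human review.
official = imagehash.phash(Image.open("official_headshot.png"))
suspect = imagehash.phash(Image.open("downloaded_copy.png"))

distance = official - suspect  # Hamming distance between 64-bit hashes
if distance > 10:
    print(f"Possible manipulation: hash distance {distance}, flag for review")
else:
    print(f"Likely the original asset: hash distance {distance}")
```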
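For the verification point above, here is a minimal Python sketch of one factor in a multi-factor check before a high-risk payment change is approved, using the pyotp library for time-based one-time passwords (TOTP). The function name and workflow are hypothetical simplifications of a real approval process.

```python
import pyotp

# One step of a multi-factor check before releasing a high-risk payment:
# the requester must supply a time-based one-time code from an authenticator
# app enrolled out-of-band. Secret handling is simplified for illustration;
# store real secrets in a secrets manager, never in source code.
secret = pyotp.random_base32()  # generated once when the device is enrolled
totp = pyotp.TOTP(secret)

def approve_payment_change(request_code: str) -> bool:
    """Return True only if the supplied one-time code is currently valid."""
    return totp.verify(request_code)

# A code read from the enrolled device verifies; a guess almost never will.
print(approve_payment_change(totp.now()))  # True
print(approve_payment_change("000000"))    # almost certainly False
```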
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.