ARTICLE
23 January 2026

Corporate Fraud And Institutional Liability In The Age Of Deepfakes

Sheppard Mullin Richter & Hampton

Contributor

Sheppard Mullin is a full-service Global 100 firm with over 1,000 attorneys in 16 offices located in the United States, Europe and Asia. Since 1927, companies have turned to Sheppard Mullin to handle corporate and technology matters, high-stakes litigation and complex financial transactions. In the US, the firm's clients include more than half of the Fortune 100.

When discussing the litigation challenges deepfake technology poses, one typically thinks of difficulties protecting individual privacy rights, complications prosecuting anonymous actors, the lack of law regulating deepfake usage outside the commercial context, and authentication and evidentiary issues. Less prevalent is any discussion of the potential liability companies face for fraud perpetrated on their customers. Yet the proliferation of deepfake fraud has hit the corporate world at a time when courts are more willing to hold companies responsible for protecting their customers from fraudsters. In addition to protecting against fraud on the company itself, businesses are well advised to protect their customers.

The Rise of Deepfake Corporate Fraud

Deepfake deception schemes are already widespread and poised to increase. The most headline-grabbing case involved a finance employee in Hong Kong who participated in what appeared to be a routine video conference with the company's Chief Financial Officer and several colleagues. Pursuant to instructions received on that video conference, the employee authorized fifteen separate transfers totaling nearly $25 million to five local bank accounts. However, the CFO and the colleagues, down to their voices and mannerisms, were all AI-generated deepfakes.1

According to research, 25.9% of executives report that their organizations have experienced one or more deepfake incidents,2 while other studies suggest 92% of companies have experienced economic loss due to a deepfake.3 Further, the accessibility of generative AI technology is only growing, lowering barriers to entry for deepfake creation.4

The Emerging Framework of Institutional Liability

At the same time, the legal landscape surrounding institutional responsibility for fraud is rapidly evolving. Some regulators have started to express the view that institutions, with their superior resources, technological capabilities, and access to industry-wide threat intelligence, bear an affirmative duty to implement controls commensurate with evolving threats.5 There are also indications of a greater willingness in the courts to require institutions to implement adequate safeguards to protect their customers from third-party fraud.6 The theory behind this shift is that institutions, particularly financial institutions, should play the biggest role in safeguarding against deepfake-driven fraud given their direct handling of financial transactions and sensitive customer data. These institutions are best equipped to implement identity verification processes that account for AI-powered advances in facial recognition, voice analysis, and behavioral biometrics.

Corporate Liability for Deepfake Consumer Frauds

Imposing liability on corporations for third-party fraud against customers creates the potential for significant litigation. At the forefront of this new liability are financial institutions, but it is not difficult to imagine an expansion to other sectors—especially given the ubiquity of companies facilitating online payments from customers. As technology makes traditional scams more sophisticated and harder to detect, companies may need to implement affirmative countermeasures to protect their customers.

Further, deepfake technology opens the potential for larger scale scams. Deepfake videos of corporate managers could impact market perception and create securities risks. Deepfaked advertisements could lead to consumer protection litigation. Deepfaked audio in robocalls could result in TCPA litigation. Deepfake technology vastly expands the potential for imposter-based scams on customers that companies may have to affirmatively guard against.

What Can Mitigate the Risk?

Companies should already be implementing internal controls and systems within their organizations to protect the organization itself from deepfake fraud. Extending these same principles from internal relationships to external relationships with customers is the best way to mitigate risk.

  • Ensure payment portals and vendors employ industry-standard identity verification processes (a minimal sketch of one such control follows this list).
  • Educate customers about threats and best practices to avoid scams.
  • Limit the number of channels used to communicate with customers, use only official channels, and educate customers that communications will come only through those channels.
  • Proactively monitor for scams and affirmatively respond to correct any misinformation in the market. Where fraudsters can be identified, consider taking affirmative legal action.
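
For illustration only, the following minimal Python sketch shows one way a payment workflow might gate high-risk instructions (large amounts, new payees, or instructions arriving over spoofable channels such as video or voice calls) behind out-of-band confirmation through a separately registered channel. All names, thresholds, and fields here are hypothetical assumptions for the sketch, not a reference to any particular system or industry standard.

    from dataclasses import dataclass

    # Hypothetical illustration: hold high-risk payment instructions until they
    # are confirmed out of band. Thresholds and field names are assumptions.

    HIGH_RISK_AMOUNT = 10_000  # escalation threshold (assumed)
    SPOOFABLE_CHANNELS = {"video_call", "voice_call", "email"}

    @dataclass
    class PaymentInstruction:
        amount: float
        payee_is_new: bool        # first transfer to this account?
        channel: str              # channel the instruction arrived on
        oob_confirmed: bool = False  # confirmed via a pre-registered channel?

    def release_payment(instruction: PaymentInstruction) -> str:
        """Release the payment only if it clears the risk gate."""
        high_risk = (
            instruction.amount >= HIGH_RISK_AMOUNT
            or instruction.payee_is_new
            or instruction.channel in SPOOFABLE_CHANNELS
        )
        if high_risk and not instruction.oob_confirmed:
            # Require confirmation through an independent, pre-registered
            # channel (e.g., a callback to a number already on file), never
            # through the channel the instruction arrived on.
            return "held: out-of-band confirmation required"
        return "released"

    # The Hong Kong scenario above: a large transfer requested on a video call.
    print(release_payment(PaymentInstruction(amount=25_000_000,
                                             payee_is_new=True,
                                             channel="video_call")))
    # -> held: out-of-band confirmation required

The design point is that the confirmation channel is independent of the requesting channel; a deepfaked video call cannot approve its own transfer request.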

The Landscape Ahead

Deepfake fraud is no longer a speculative risk but an operational reality that demands an immediate, comprehensive response. Successfully navigating this new landscape will require treating deepfake prevention as a matter of technological investment, process redesign, and cultural transformation. Organizations that recognize these new threats as a fundamental challenge to business operations and respond with commensurate investment and cultural change will position themselves not only to avoid liability but also to maintain the trust on which modern business depends.

Footnotes

1. Arup lost $25mn in Hong Kong deepfake video conference scam, The Financial Times, May 16, 2024, available at https://www.ft.com/content/b977e8d4-664c-4ae4-8a8e-eb93bdf785ea.

2. Generative AI and the fight for trust, Deloitte, May 2024, available at https://www.deloitte.com/content/dam/assets-zone3/us/en/docs/services/consulting/2025/generative-ai-and-the-fight-for-trust.pdf.

3. 92% of companies have experienced financial loss due to a deepfake, CFO.com, Nov. 6, 2024, available at https://www.cfo.com/news/most-companies-have-experienced-financial-loss-due-to-a-deepfake-regula-report/732094/.

4. See id.

5. See Fed's Barr: Banks are frontline defenders against deepfake-enabled fraud, ICBA.org, April 18, 2025, available at https://www.icba.org/w/fed-s-barr-banks-are-frontline-defenders-against-deepfake-enabled-fraud.

6. See, e.g., Yuille v. Uphold HQ Inc., 686 F. Supp. 3d 323, 337 (S.D.N.Y. 2023).

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

