The integration of artificial intelligence into digital communication systems has transformed the creation, distribution and consumption of information. While this technological evolution has unlocked unprecedented opportunities for innovation and efficiency, it has simultaneously given rise to significant risks, most notably the amplification and rapid circulation of false or manipulative content. As AI-generated media becomes increasingly sophisticated, realistic and accessible, the boundary between authentic and fabricated material is becoming harder to distinguish.
Among the most concerning applications of AI are deepfakes: highly realistic, AI-generated image, audio, or video content that resembles existing persons, objects, places, entities, or events and falsely appears to a viewer as authentic or truthful.1
What Challenges and Risks Do Deepfakes Present?
The practical risks posed by deepfake technology are already evident in Malta. A cryptocurrency scam was promoted through an AI-generated video falsely portraying the Prime Minister and other prominent local entrepreneurs and businesspeople endorsing investment opportunities.2 The incident demonstrates how synthetic media can be used to create a false sense of legitimacy, undermining traditional trust indicators and exposing weaknesses in due diligence and AML controls, particularly in digital and non-face-to-face financial transactions.
According to PwC's CEE 2024 'Global Economic Crime Survey', cybercrime – particularly impersonation scams involving deepfake technology – is the most frequently reported form of fraud in the CEE region. The 'Global Digital Trust Insights Report 2025' further notes that security leaders believe generative AI and cloud technologies have significantly widened the cyber-attack surface in the past year, leaving organisations more exposed to advanced and complex threats.
How Deepfakes Alter Due Diligence
Deepfakes present a substantial risk to companies and organisations in vendor management, as they increase the potential for fraud and demand heightened due diligence.3 From AI-generated voice scams to synthetic identities, the truth today can be easily manipulated, particularly because deepfakes can be created by anyone from their own home.
Conventional due diligence frameworks often assume that seeing or hearing a counterparty – whether during onboarding calls, identity verification checks, or transaction authorisations – provides a reasonable level of assurance as to that person's identity and intent. Deepfake technology disrupts this premise by enabling highly realistic synthetic audio and video that can convincingly mimic real individuals, including executives, customers, or public officials, thereby rendering non-face-to-face or voice-based verification insufficient on its own. Organisations that rely on non-face-to-face transactions, remote transaction authorisations and ongoing customer verification are therefore considered the most exposed to this risk.4
As a result, due diligence can no longer rely primarily on human judgment or single-channel verification methods.
Practical Due Diligence Enhancements
- Multi-factor identity verification to reduce reliance on single-channel authentication and limit exposure to impersonation and synthetic identity fraud.
- Call-back authentication protocols for transaction approvals, particularly in remote or high-value transactions, to confirm instructions through independent communication channels.
- AI-powered detection tools embedded within internal control frameworks to identify manipulated audio, video, or image content in real time.5
- Updated policies for non-face-to-face transactions, ensuring procedures reflect emerging deepfake risks and evolving methods of digital deception.
These measures support regulatory compliance, reinforce directors' oversight and fiduciary responsibilities, and play a critical role in mitigating financial crime and operational risk in an AI-driven environment.
How are Legal and Regulatory Frameworks Addressing Deepfake Abuse?
Under the EU AI Act, deployers of artificial intelligence systems that generate or manipulate image, audio, or video content for the purpose of creating deepfakes are subject to specific transparency obligations. These transparency requirements, set out in Article 50 of the AI Act, will enter into force on 2 August 2026.
Beyond transparency obligations, certain deepfake systems may fall within the EU AI Act's high-risk classification where their use engages particularly sensitive or regulated contexts. This includes deepfake technologies deployed for biometric identification or categorisation, as well as systems used within critical infrastructure where malfunction or manipulation could affect essential services, including road traffic management or the supply of water, gas, heating, or electricity.
However, the AI Act remains largely focused on transparency and systemic risk management and does not directly address the use of deepfakes as a tool for transactional deception, with enforcement in such cases continuing to rely on existing financial crime, AML, and civil liability regimes.
The EU AI Act is not the sole instrument regulating AI-related identity fraud, as numerous jurisdictions have enacted anti-deepfake legislation over the past year.6
In June 2025, the Danish government introduced amendments to the Danish Copyright Act that would recognise a person's face, body, and voice as copyright-protected works, enabling individuals to pursue legal action against the unauthorised creation of AI-generated depictions of their likeness or creative output. The law is expected to pass in 2026.
Maltese law does not yet provide specialised standards or procedures for detecting deepfakes. Courts and institutions depend on experts being able to recognise when something is inauthentic; this is a risky assumption, however, because deepfakes are designed to deceive even trained eyes and ears.7 Discussions have arisen in Parliament about following the Danish Government's model, but no law has been passed to date.8
Deepfake technologies fundamentally undermine traditional trust mechanisms that have long underpinned digital interactions, identity verification, and evidentiary reliability. In this evolving landscape, due diligence can no longer remain a static or purely procedural exercise, but must develop into a dynamic, technology-aware process capable of responding to increasingly sophisticated forms of deception. This, in turn, underscores the need for legal and compliance strategies grounded in foresight rather than reactive enforcement, enabling organisations to anticipate emerging AI-related risks and embed resilience within their governance frameworks.
Footnotes
1. Regulation (EU) 2024/1689
2. Micallef, D. (2025, September 26). Ukrainian woman accused of using deepfake PM video in cryptocurrency scam. Newsbook. https://newsbook.com.mt/en/ukrainian-woman-accused-of-using-deepfake-pm-video-in-cryptocurrency-scam/
3. Poynter, A. (2025, September 26). Vendor due diligence in the age of deepfakes and AI fraud: What you need to know. PaymentWorks. https://www.paymentworks.com/2025/09/26/vendor-due-diligence-deepfakes-fraud/
4. Cyber risks associated with Generative Artificial Intelligence and deepfakes. ACD. (n.d.). https://acd.mlaw.gov.sg/news/sector-developments/cyber-risks-associated-with-genai-and-deepfakes/
5. Cyber risks associated with Generative Artificial Intelligence and deepfakes. ACD. (n.d.). https://acd.mlaw.gov.sg/news/sector-developments/cyber-risks-associated-with-genai-and-deepfakes/
6. Deepfake regulations: AI and Deepfake Laws of 2025. Regula. (n.d.). https://regulaforensics.com/blog/deepfake-regulations/
7. Sladden, A. (2025, September 29). Deepfakes: Malta's hidden threat to justice. Times of Malta. https://timesofmalta.com/article/deepfakes-malta-hidden-threat-justice
8. Lovin Malta. (2025, November 20). Watch: PN MP Julie Zahra urges Malta to clamp down on deepfakes. https://lovinmalta.com/news/watch-pn-mp-julie-zahra-urges-malta-to-clamp-down-on-deepfakes/
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.