5 February 2025

AI And Machine Learning In Financial Crime Compliance

As financial crime risks evolve, including those risks posed by the use of AI and other emerging technologies, so too must firms' financial crime compliance response.

As financial crime risks evolve, including those risks posed by the use of AI and other emerging technologies, so too must firms' financial crime compliance response. It is unsurprising, therefore, that AI forms part of both the problem and the solution. Accompanied by a governance model that has a "human in the loop" component, AI-enabled solutions can support more efficient and effective systems and controls for the prevention of financial crime risks. In this article we explore the current use of AI to combat financial crime and what lies ahead for firms using third parties to comply with their obligations in this area.

AI-enabled financial crime compliance

AI continues to dominate both professional and private conversation. The financial services sector has been an early adopter of the technology, using AI widely, including within its risk and compliance functions. AI's role in fraud detection and prevention has developed rapidly: the Bank of England and FCA's third survey of AI and machine learning in UK financial services revealed that it is the third highest use case in financial services and is expected to increase over the next three years. Customer due diligence and transaction monitoring are two further areas where AI can deliver significant operational efficiencies.

What is driving this trend? Financial crime plainly remains one of the most significant risks facing financial services firms and sits firmly within the regulator's focus – tackling financial crime is one of the main objectives of the FCA's new five-year strategy for 2025-2030. Firms are therefore strongly motivated to adopt the more sophisticated and dynamic protections against financial crime risks that AI-enabled tools offer.

Recent examples of the use of AI in this area are real success stories: AI-enabled tools have been deployed in fraud detection and prevention, customer due diligence and transaction monitoring.

However, these AI-enabled tools are inherently complex, requiring considerable resource and extensive market data to develop and maintain. Third-party providers of AI-enabled tools designed to detect and combat financial crime have emerged, raising important questions of governance, accountability, expertise and, ultimately, regulatory compliance.

The Reliance on Third Parties

A third of all current AI use cases deployed by respondents to the regulators' AI survey are third-party implementations, up from 17% in the regulators' equivalent 2022 survey. Risk and compliance was the business area with the second highest percentage of third-party implementations (64%), narrowly behind usage in HR (65%). The regulators anticipate that the use of third-party implementations will increase as AI models become more complex and outsourcing costs decrease.

Alongside this likely increase in reliance on third-party providers, evidence suggests that firms have an inadequate understanding of how outsourced systems operate and are trained. For example, almost 50% of respondents to the Bank of England's survey reported having only a partial understanding of the AI technologies they use, admitting "a lack of complete understanding" where these technologies are developed by third parties rather than internally.

This is problematic given the regulatory requirements governing the oversight of outsourced functions, which include the use of third-party AI tools. Such oversight cannot be properly exercised where the host firm does not understand the system. Firms will need to consider how the fundamental differences between AI and the algorithmic models that preceded it affect the discharge of their responsibilities. Whilst the typical safeguards contained in an outsourcing agreement, i.e. provisions stipulating audit rights and business continuity arrangements, may mitigate emergent risks in this area, a new suite of protective solutions and governance arrangements will be required.

This need is particularly apparent in respect of data. AI-enabled tools that rely on poor quality or incomplete data will, necessarily, produce poor quality outcomes. Data governance and standards have never been more important and will be a fast-developing field in the context of AI models.

Similarly, firms will need to implement measures to protect against third-party models generating biased results, a likely focus of regulatory concern. The FCA has cautioned that firms using AI-enabled tools should consider whether they may lead to worse outcomes for some groups of consumers, in breach of the Consumer Duty, due to these technologies embedding or amplifying bias. Bias can occur at any point from the creation of the algorithm to its deployment. Incorrect problem-framing or reliance on datasets that are not representative of the firm's customer demographic will result in an inherently biased learning process and discriminatory outputs. Use of third-party models that rely on market-wide transfer learning in risk detection (i.e. leveraging pre-existing models trained on large financial datasets) may exacerbate this problem.

The use of third-party AI models may also present an accountability challenge, if not a deficit, especially where the developers and providers sit outside the regulatory perimeter. Failures connected with the operation of financial crime systems and controls have already been a significant focus of regulatory enforcement action; the Final Notices issued by the FCA to Metro Bank Plc and Starling Bank Ltd in late 2024 are stark examples. However, there have not been any related actions taken against Senior Managers, which may speak to the evidential challenges presented by such cases. Whilst establishing individual accountability may become more difficult in respect of AI systems with a significant third-party component, regulators may closely scrutinise firms' standards of governance and oversight.

Firms will need staff with the requisite expertise and training to ensure that the models, and their development, can be audited and overseen effectively. They will also need to assess the extent to which "humans in the loop" should be incorporated into the operation of the model, i.e. where the appropriate balance between efficiency and protection lies. Should failings emerge, unravelling what went wrong and who was ultimately responsible will be challenging given the complexity of the models and the number of parties feeding into the systems. Firms will need to be able to explain, in a meaningful way, the machine learning model and any resulting decisions, especially if they lead to consumer harm. Firms will also need to design and implement feedback mechanisms to detect and prevent model drift and to enable prompt reaction to bias or new threats. These are significant questions, which will require input at Board level.

Where an unacceptable accountability deficit emerges, the regulator may shift its focus from the governance arrangements within firms, instead casting its perimeter wider to bring third-party providers of AI financial crime systems within the regulatory fold. This has already been done in respect of critical third-party providers, which supply material services to the financial sector. That approach may be extended to other areas in which the industry's dependence on third-party providers for complex AI systems becomes entrenched.

Conclusion

There is no doubt that AI can be a powerful ally in the fight against financial crime, helping financial institutions protect their customers and easing the burden on often thinly stretched risk and compliance teams. As we look ahead in 2025, firms need to consider how to balance using AI to boost their financial crime systems and controls with their compliance with core regulatory obligations, especially where they rely on third-party providers. Accountability may prove the biggest challenge: firms will need proper governance and oversight arrangements for third-party AI models, with meaningful input from senior management, so that those models can be effectively fine-tuned to each firm's specific risks and emerging threats.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
