This article was co-written with Shane Riedel, CEO of Elucidate.
Introduction
As artificial intelligence (AI) becomes central to financial crime compliance -- powering fraud detection, transaction monitoring, and sanctions screening -- internal audit faces a new challenge: how do you provide assurance over something you cannot always see or explain? Is the internal audit function prepared to meet it?
There is a great deal written about the use of AI to enhance the fight against financial crime, but less guidance is available for auditing those systems. The technologies provide powerful tools to detect suspicious activities with greater accuracy and speed than traditional methods. Reports forecast that the financial services sector will be one of the leading industries in AI investment, with billions dedicated annually to AI technologies. In 2027, the financial services industry is expected to spend just under $100 billion on AI, which represents nearly a 30% increase per year since 2023.1 Spending on AI for anti-money laundering (AML) and compliance represents a substantial portion of the overall investment. Financial technology research and advisory firm Celent forecasted that financial institutions would spend $34.7 billion on financial crime technology and $155.3 billion on operations in 2024,2 demonstrating the increasing reliance on AI to mitigate financial crime risks.
We explore how auditors can rise to the AI challenge and how they can use technology to enhance their own effectiveness.
AI and Its Subsets: Definitions
AI is an umbrella term for any intelligent system performing tasks typically requiring human reasoning. It encompasses several widely used subsets, including:
- Machine Learning (ML), comprising systems that learn from data and make predictions or decisions without being explicitly programmed, updating their outputs over time as new data arrives.
- Robotic Process Automation (RPA), which automates repetitive, rule-based tasks by mimicking human actions within digital systems, and is often used in control testing and data collection.
Together, these technologies are reshaping risk management and challenging traditional audit methodologies. They should be viewed as complementary and not as replacing human capital, whether used by the auditor or the auditee.
Core Principles for Auditing AI in Financial Crime
As financial institutions deploy AI across compliance functions, audit teams must go beyond traditional controls and embed assurance into the design, use, and evolution of these systems. These five principles provide the foundation for effective audit oversight.
1. Model Validation & Testing
Auditors must ensure that all AI and ML models are rigorously tested before and after deployment. This includes:
- Validating model assumptions and architecture against documented design objectives.
- Testing performance against real-world data and known risk typologies.
- Back-testing historic cases to identify drift or degradation.
- Reviewing how the model evolves over time and whether changes are version-controlled and explainable. This is essential in high-risk applications like alert triage, transaction scoring, or sanctions filtering.
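As a concrete illustration of the drift checks described above, the sketch below computes the Population Stability Index (PSI), one common back-testing metric that compares a model's score distribution at validation time against its distribution today. The bucket proportions and the 0.25 rule of thumb are illustrative assumptions, not a prescribed standard:

```python
import math
from typing import Sequence

def population_stability_index(expected: Sequence[float], actual: Sequence[float]) -> float:
    """Compare two score distributions bucketed into the same bins.

    Each argument holds the proportion of observations per bucket. A
    common rule of thumb treats PSI > 0.25 as significant drift that
    warrants revalidating the model.
    """
    eps = 1e-6  # guard against empty buckets
    return sum(
        (a - e) * math.log((a + eps) / (e + eps))
        for e, a in zip(expected, actual)
    )

# Illustrative proportions of transactions per score bucket:
# at model validation vs. in the current review period.
baseline = [0.25, 0.25, 0.25, 0.25]
current = [0.10, 0.20, 0.30, 0.40]
psi = population_stability_index(baseline, current)  # roughly 0.23: drift worth investigating
```

An auditor re-performing this check independently of the model owner gains direct evidence of whether the population the model scores today still resembles the one it was validated on.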
2. Data Integrity & Input Quality
AI is only as reliable as the data it learns from. Auditors should:
- Review data sources for completeness, consistency, and relevance to risk objectives.
- Identify and test the use of proxies, inferred attributes, or third-party enrichments.
- Ensure input data adheres to data privacy and security standards (e.g., GDPR). Audit teams may need to use analytics tools to recreate outputs from input data, especially where black-box models are used.
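To make the completeness review concrete, the sketch below counts missing values per mandatory field across a batch of records. The field names (`customer_id`, `country`, `transaction_amount`) and sample data are hypothetical placeholders for whatever the institution's data dictionary defines as mandatory:

```python
# Hypothetical mandatory fields; a real audit would take these from
# the institution's data dictionary or model input specification.
REQUIRED_FIELDS = {"customer_id", "country", "transaction_amount"}

def completeness_issues(records):
    """Return the count of missing values per required field."""
    missing = {field: 0 for field in REQUIRED_FIELDS}
    for record in records:
        for field in REQUIRED_FIELDS:
            if record.get(field) in (None, ""):
                missing[field] += 1
    return missing

sample = [
    {"customer_id": "C1", "country": "DE", "transaction_amount": 120.0},
    {"customer_id": "C2", "country": "", "transaction_amount": None},
]
issues = completeness_issues(sample)
# issues == {"customer_id": 0, "country": 1, "transaction_amount": 1}
```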
3. Transparency & Bias Detection
Auditors must demand transparency from models and those who build them. This includes:
- Requiring documentation of how decisions are made (e.g., decision trees or confidence scores).
- Identifying potential for algorithmic bias, such as proxy discrimination or disproportionate false positives.
- Evaluating whether fairness metrics or explainability tools (like SHAP or LIME) are integrated into the system.
- Testing whether decision logic can be overridden, reviewed, and understood by non-technical users.
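One simple bias test auditors can run themselves is to compare false positive rates across customer groups, since a disproportionate false positive rate for one group is a classic sign of proxy discrimination. The sketch below assumes labeled outcomes are available; the group labels and sample data are purely illustrative:

```python
from collections import defaultdict

def false_positive_rate_by_group(cases):
    """cases: iterable of (group, was_flagged, truly_suspicious) tuples.

    Returns, per group, the share of legitimate cases the model
    wrongly flagged (the false positive rate).
    """
    flagged = defaultdict(int)
    legitimate = defaultdict(int)
    for group, was_flagged, truly_suspicious in cases:
        if not truly_suspicious:
            legitimate[group] += 1
            if was_flagged:
                flagged[group] += 1
    return {g: flagged[g] / legitimate[g] for g in legitimate}

sample = [
    ("A", True, False), ("A", False, False), ("A", False, False), ("A", False, False),
    ("B", True, False), ("B", True, False), ("B", False, False), ("B", False, False),
]
rates = false_positive_rate_by_group(sample)
# Group A: 1/4 = 0.25, group B: 2/4 = 0.50 -- group B's legitimate
# customers are flagged twice as often, which would warrant escalation.
```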
4. Regulatory Compliance
Models must meet the letter and spirit of the law. Auditors should:
- Map outputs to AML/KYC obligations, sanction screening standards, and internal policies.
- Confirm that models operate within boundaries set by regulators, including where AI influences onboarding, risk scoring, or case closure.
- Understand how compliance obligations change across jurisdictions — and whether models reflect that.
5. Continuous Monitoring
One-time reviews are no longer sufficient. Auditors should:
- Confirm whether the organization uses dashboards and live diagnostics to monitor model outputs.
- Evaluate feedback loops where users can flag errors or override incorrect conclusions.
- Test how anomalies, drift, or failures are detected and resolved, and whether these are logged for audit trail purposes.
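A minimal version of such a monitoring check might compare daily alert rates against an agreed baseline and tolerance band, flagging out-of-band days for investigation and logging. Both parameters, and the sample rates, are assumptions an institution would calibrate itself:

```python
def out_of_band_days(daily_alert_rates, baseline_rate, tolerance):
    """Flag days whose alert rate deviates from the baseline by more
    than the agreed tolerance. Each flagged day should be investigated
    and logged for audit trail purposes."""
    return [
        (day, rate)
        for day, rate in enumerate(daily_alert_rates)
        if abs(rate - baseline_rate) > tolerance
    ]

rates = [0.021, 0.019, 0.020, 0.041, 0.022]  # day 3 spikes well above baseline
flagged = out_of_band_days(rates, baseline_rate=0.020, tolerance=0.010)
# flagged == [(3, 0.041)]
```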
Governance: The Backbone of AI Assurance
Strong governance is not just good practice — it is a prerequisite for safe and effective AI use.
A sound AI governance framework should include:
- Clear roles and responsibilities for model ownership, performance review, validation, and change control.
- Cross-functional engagement, including compliance, technology, internal audit, and model risk.
- Change management controls, such as documented model updates, retraining logs, and approval processes.
- Training programs that ensure model users, reviewers, and auditors understand how to use and question AI outputs.
Crucially, governance must also promote external engagement. Stakeholders — including regulators, customers, and partners — need confidence in how AI is used. Transparency, documented policies, and open dialogue are essential to building trust.
Auditing the Future: Mastering Technical Innovations
Auditors need to gain a working knowledge of how AI models are trained, what data they rely on, and how to assess their effectiveness. Many firms are appointing AI audit champions — specialists who liaise with model risk teams and help translate technical issues into assurance insight.
At the same time, auditors are adopting analytics tools themselves — running parallel tests, recreating alert logic, and applying risk scoring across their own audit universe.
A rigorous risk assessment will identify potential vulnerabilities, data integrity issues, and compliance risks, and evaluate their implications for model outputs and the organization's exposure to financial crime.
Empowering Auditors
Auditing AI and ML systems requires a unique set of skills and training due to the complexity and evolving nature of these technologies. Auditors should be trained to assess the transparency and explainability of AI models, which involves understanding how models make decisions and ensuring they can be interpreted and justified.
Revolutionizing Audits: The Innovation Edge
Innovation can also enhance the auditor's own work by introducing tools that analyze data and test below the line to surface emerging and previously unidentified risks. These tools can re-perform the monitored activity using automation, identifying areas and events where risk went undetected or incorrect conclusions were drawn. An added benefit of an automated tool is that it quantifies the risk, producing a more powerful message than merely warning of potential regulatory breaches or fines.
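The re-performance idea can be sketched as follows: re-run a simple detection rule over the full transaction population and quantify what the production system missed. The rule, field names, and threshold below are illustrative assumptions, not a real detection scenario:

```python
def quantify_undetected_risk(transactions, alerted_ids, threshold):
    """Re-perform a simple rule (amount >= threshold) across the full
    population and quantify the alerts the production system missed."""
    missed = [
        t for t in transactions
        if t["amount"] >= threshold and t["id"] not in alerted_ids
    ]
    return len(missed), sum(t["amount"] for t in missed)

txns = [
    {"id": 1, "amount": 12_000},
    {"id": 2, "amount": 9_500},
    {"id": 3, "amount": 15_000},
]
# The production system alerted only on transaction 1.
count, value = quantify_undetected_risk(txns, alerted_ids={1}, threshold=10_000)
# count == 1, value == 15_000: one missed alert worth 15,000.
```

Reporting "one missed alert worth 15,000" is the kind of quantified finding the paragraph above describes: more persuasive than an abstract warning about regulatory exposure.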
Auditing tools that are emerging in the industry include:
- Data Analytics Platforms, which can empower auditors to detect patterns and anomalies.
- Blockchain technology, which can provide tamper-evident, shared records that speed up audits and reduce manual errors.
- AI-Powered Risk Assessment, which offers real-time insight into risk across diverse data sources, predicting vulnerabilities and allowing auditors to target critical areas faster.
- Robotic Process Automation, which can boost productivity and free auditors from repetitive tasks.
- Natural Language Processing, which can be used to interpret unstructured data and expose compliance breaches hidden in transactions.
- Predictive Analytics, which can enable auditors to forecast future risks and trends from historical data.
These tools and techniques significantly enhance the accuracy, efficiency, and scope of audits, enabling auditors to handle complex data, provide real-time insights, and focus on strategic analysis, ultimately leading to more robust financial crime detection and prevention measures.
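As a toy example of predictive analytics in this context, the sketch below fits a least-squares trend line to historical alert volumes and extrapolates it forward. Real predictive models would be far richer, and the figures here are invented:

```python
def linear_trend_forecast(history, steps_ahead):
    """Fit a least-squares line to equally spaced observations and
    extrapolate it -- a minimal stand-in for predictive analytics."""
    n = len(history)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(history) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, history)) / sum(
        (x - x_mean) ** 2 for x in xs
    )
    intercept = y_mean - slope * x_mean
    return intercept + slope * (n - 1 + steps_ahead)

alert_volumes = [100, 110, 120, 130]  # hypothetical alerts per quarter
forecast = linear_trend_forecast(alert_volumes, steps_ahead=2)
# forecast == 150.0: the trend suggests roughly 150 alerts two quarters out.
```

Even a forecast this simple lets audit plan resourcing ahead of expected alert volumes rather than reacting after backlogs form.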
Conclusion
AI is changing the game — not just for compliance, but for audit itself. By combining curiosity with technology and governance, internal audit can move from reactive review to proactive risk intelligence. In a digital-first compliance landscape, audit's role is not just to critique, but to champion safe, explainable, and effective innovation. AI presents a challenge for auditors, but also an opportunity to complement and enhance audit activities, freeing them to focus on higher-value work.
Footnotes
1. Forbes, "AI In FinTech: Transforming The Way Business Owners Access Capital."
2. Celent, "IT and Operational Spending on Financial Crime Compliance: 2024 Edition."
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.