ARTICLE
10 March 2025

Managing AI Model Risk In Financial Institutions: Best Practices For Compliance And Governance

Kaufman Rossin

Contributor

Kaufman Rossin, one of the top CPA and advisory firms in the U.S., has guided businesses and their leaders for more than six decades. Its 600+ employees deliver traditional audit, tax, and accounting services, plus business consulting, risk advisory and forensic advisory services. Affiliates offer wealth, insurance, and fund administration. We've earned many awards, but we're most proud of our Best of Accounting® Award for superior client service for four years running, because it's based on ratings from more than 1,000 of our clients.

Artificial intelligence (AI) and machine learning (ML) have moved to front and center in financial institutions' Bank Secrecy Act/Anti-Money Laundering (BSA/AML) and Office of Foreign Assets Control (OFAC) systems and models. AI/ML-based models present an opportunity to improve the efficiency and effectiveness of monitoring systems, but the complexity of these technologies has raised the stakes for model risk management.

Improper or insufficient model risk management can have significant consequences: erosion of regulators' trust; formal or informal regulatory actions, including the possibility of expensive look-backs, remediation, and regulatory fines; reputational damage; financial loss; and internal inefficiencies.

Model risk increases with a model's inherent complexity, and AI/ML-based third-party models make it particularly challenging for financial institutions to understand and explain how their systems operate. Nonetheless, regulators hold institutions responsible for mitigating risk and ensuring the conceptual soundness of any model or algorithm their systems use. As the April 9, 2021, Interagency Statement on Model Risk Management for Bank Systems Supporting Bank Secrecy Act/Anti-Money Laundering Compliance notes, "Banks are ultimately responsible for complying with BSA/AML requirements, even if they choose to use third-party models." Financial institutions must understand their AI-based systems and how they operate.

Proper data and governance are critical to successful implementation and ongoing use of complex AI/ML-based models. Financial institutions should mitigate model risk through controls that aim to keep systems operating as intended, with a focus on preventing inappropriate or incomplete data inputs or training data sets, biased models, insecure or error-prone models, and privacy breaches. Any model should be tested before implementation, with results evaluated and thoroughly documented.

Understand AI model risks

First, when the banking industry talks about AI models, the discussion is typically about models that use ML, which is a subset of AI that provides systems the ability to automatically learn and improve without being explicitly programmed. Few, if any, financial institutions are currently leveraging full AI. Very few vendors even offer full-AI models. Such models haven't been validated widely, if at all, and do not yet have the trust of institutions or their regulators.

AI/ML model risk mitigation isn't fundamentally different from risk mitigation for other third-party models. As the 2021 Interagency Statement notes, "Sound risk management practices include obtaining sufficient information from the third party to understand how the model operates and performs, ensuring that it is working as expected, and tailoring its use to the unique risk profile of the bank."

As with any model, understanding model risk begins with evaluating the model's conceptual soundness by looking at data integrity/representativeness, bias, model documentation/explainability, parameter and method selection, and training set curation. However, AI/ML models may require extra attention, as many are "black boxes" that use proprietary algorithms and can lack transparency; institutions may not know or understand the software's inner workings or how its outputs are calculated. In addition, the large volume of structured and unstructured data used to train AI/ML models can lead to issues of data integrity and data bias. It's crucial to obtain documentation from the vendor that explains how the model works and to have a process in place to validate that the model is working properly based on that documentation and how it is configured for your institution.
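As one illustration of what such a validation process might include, the sketch below checks data representativeness by comparing the distribution of a model input in a vendor-provided training sample against the institution's own production data, using the population stability index (PSI). The file names, column names, and the 0.25 threshold are illustrative assumptions, not any particular vendor's methodology.

```python
# A minimal sketch of a data-representativeness check, assuming the
# institution can obtain a sample of the vendor's training data and
# export its own production data. All names here are hypothetical.
import numpy as np
import pandas as pd

def population_stability_index(expected: pd.Series, actual: pd.Series, bins: int = 10) -> float:
    """Compare the distribution of one model input between two samples.
    A PSI above ~0.25 is a common rule of thumb for a material shift."""
    edges = np.histogram_bin_edges(expected.dropna(), bins=bins)
    exp_counts, _ = np.histogram(expected.dropna(), bins=edges)
    act_counts, _ = np.histogram(actual.dropna(), bins=edges)
    exp_pct = np.clip(exp_counts / exp_counts.sum(), 1e-6, None)
    act_pct = np.clip(act_counts / act_counts.sum(), 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

# Flag inputs whose production distribution diverges from training data.
training = pd.read_csv("vendor_training_sample.csv")    # hypothetical extract
production = pd.read_csv("institution_production.csv")  # hypothetical extract
for col in ["transaction_amount", "monthly_velocity"]:
    psi = population_stability_index(training[col], production[col])
    print(f"{col}: PSI={psi:.3f}{'  <-- investigate' if psi > 0.25 else ''}")
```

A material divergence would not by itself mean the model is unsound, but it should trigger discussion with the vendor and be documented as part of the validation record.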

Mitigate AI/ML model risk with effective documentation

AI/ML models present two main sources of model risk:

  • Fundamental errors that produce inaccurate or incomplete outputs when compared to design objectives and intended use of a system or process
  • Incorrect usage or misunderstood limitations or assumptions

Regulators typically evaluate third-party models through the institution's model risk management documentation, which must be comprehensive and detailed so that a knowledgeable third party can recreate the model without access to the model development code.

In addition to meeting regulatory requirements, good documentation can improve communication with auditors and bank management and demonstrate to the institution's board how effective the model is in mitigating BSA/AML or OFAC risk.

Appropriate documentation demonstrates the model owner's:

  • Rationale: Why the institution chose this model over others, and the factors that drove the implementation decision
  • Assumptions: Which assumptions were made during design and deployment of the model, including underlying principles, data sources, variables and methodologies
  • Testing performed: Details of pre-implementation testing and validation of the model with the institution's data, including methodologies, results and adjustments made based on testing outcomes
  • Decision-making: Why the model was implemented and validated in this way and how those choices were arrived at, including appropriate cut-over analysis and user acceptance testing

Institutions may face unique barriers to documenting AI/ML models, including use of invalid or inappropriate assumptions, lack of tech savviness, and lack of experience with complex models. Often, though, one of the most significant barriers is that information about the model must come from the vendor, which may be reluctant to share information or challenged to share it in a way that institutions and regulators can understand.

Ultimately, appropriate documentation of AI/ML models – much like selection, design and deployment – requires transparency between the vendor and the institution. This transparency should be bidirectional and may be enhanced by working with an experienced consultant who can clarify the information needed for effective documentation and "translate" between software vendor and institution.

The institution should clearly share with the vendor its:

  • Risk profile
  • Business model
  • Technology capabilities
  • Needs and expectations

The vendor should be clear about:

  • How its model works
  • How underlying assumptions are tailored for the specific institution
  • Any limitations the model has
  • How its technology platform works
  • Best practices for testing, implementing, using and tweaking the system

As with any model, documentation is only one part of risk mitigation

Pre-implementation testing, proper implementation and validation of the system are also important and should be documented. Particular attention should be paid to pre-implementation testing, as it is much easier and less costly to remediate issues prior to going live than it is to fix them after implementation.

Pre-implementation testing should include comparing the number of false positives and missed flags on the new platform to those of the current platform. An effective AI/ML platform should produce fewer false positives and a higher overall effectiveness ratio (the proportion of alerts that prove productive).
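A minimal sketch of how such a parallel-run comparison might be tabulated follows. It assumes alerts from both platforms have been dispositioned by investigators over the same review period; all file and column names are hypothetical.

```python
# Compare false positives, effectiveness ratio, and missed flags across
# a legacy platform and an AI/ML candidate run in parallel. Each row is
# one alert, labeled productive (1) or false positive (0) by investigators.
import pandas as pd

def platform_metrics(alerts: pd.DataFrame) -> dict:
    """Effectiveness ratio = productive (true positive) alerts / total alerts."""
    total = len(alerts)
    productive = int(alerts["productive"].sum())
    return {
        "total_alerts": total,
        "false_positives": total - productive,
        "effectiveness_ratio": round(productive / total, 3) if total else 0.0,
    }

legacy = pd.read_csv("legacy_platform_alerts.csv")    # columns: case_id, productive
candidate = pd.read_csv("ai_ml_platform_alerts.csv")  # same layout

print("legacy:", platform_metrics(legacy))
print("AI/ML candidate:", platform_metrics(candidate))

# Missed flags: activity the legacy platform alerted on (and investigators
# confirmed as productive) that the candidate platform never alerted on.
confirmed = set(legacy.loc[legacy["productive"] == 1, "case_id"])
missed = confirmed - set(candidate["case_id"])
print("missed flags:", len(missed))
```

Results like these, along with any threshold adjustments they prompt, belong in the pre-implementation testing documentation described above.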

Once the system is running, ongoing monitoring, upkeep, tuning and testing are important for both risk management and regulatory compliance. Due to the pace at which AI continues to evolve and the higher risks sometimes associated with these models, regular independent validation is particularly important. At a minimum, the institution, or an outside consultant, should conduct annual reviews of each of its models to evaluate whether they still align with the institution's risks, expectations, current products or services, types of customers, and business activities. Annual reviews should also consider whether there have been any significant model changes that may require validation and whether new testing is warranted.
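One common tuning technique that can support these periodic reviews is below-the-line (BTL) testing: sampling activity that scored just under the alerting threshold and routing it for manual review to estimate the risk of missed flags. The sketch below illustrates only the sampling step; the threshold, margin, sample size, and file names are placeholder assumptions, not recommendations.

```python
# A minimal below-the-line sampling sketch for a periodic tuning review.
# Actual parameter values should come from the institution's risk-based
# tuning methodology; those shown here are hypothetical.
import pandas as pd

THRESHOLD = 80    # current alerting cutoff (hypothetical)
BTL_MARGIN = 10   # how far below the line to sample
SAMPLE_SIZE = 50

scored = pd.read_csv("monthly_scored_activity.csv")  # columns: case_id, score
btl = scored[(scored["score"] < THRESHOLD) & (scored["score"] >= THRESHOLD - BTL_MARGIN)]
sample = btl.sample(n=min(SAMPLE_SIZE, len(btl)), random_state=42)
sample.to_csv("btl_review_queue.csv", index=False)
print(f"Queued {len(sample)} of {len(btl)} below-the-line cases for manual review")
```

If reviewers find suspicious activity in the BTL sample, that is evidence the threshold may be set too high, and the finding should feed back into tuning and validation documentation.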

More complex AI models will require a different approach to validation

While AI/ML-based models may perform better than traditional ones in BSA/AML and OFAC monitoring systems, their lack of explainability can cause roadblocks to their effective use. An understanding of the unique risks of AI/ML models, transparent communication between vendor and institution, as well as effective documentation of the models, their validation and testing, and their implementation can improve both effectiveness and acceptance of these tools.

Looking ahead, expect AI-based models to become more complex, including the use of generative AI. This will require a different approach to validation and model risk management, addressing risks associated with the source and use of data, third-party access to institutional data used to train generative AI, biased or discriminatory output, and deeper explainability issues, along with other novel risks of this new technology.

Model validation will likely include looking more closely at areas such as data integrity and potential bias in algorithms. Emerging certification standards, such as ISO/IEC 42001 (Artificial Intelligence Management Systems), will allow for better governance of AI systems and help with meeting the needs of auditors, validators and regulators.
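As a simple illustration of an output-bias screen of the kind such validations might include, the sketch below compares alert rates across customer segments against the overall rate. The file name, column names, and the 0.80/1.25 screening bounds are illustrative assumptions; a material deviation would prompt investigation, not a conclusion of bias.

```python
# A minimal sketch of a segment-level disparity screen on model output.
# Each row is one scored customer with a hypothetical segment label and
# a 0/1 flag for whether the model generated an alert.
import pandas as pd

scored = pd.read_csv("scored_customers.csv")  # columns: segment, alerted (0/1)
overall = scored["alerted"].mean()
rates = scored.groupby("segment")["alerted"].mean()

for segment, rate in rates.items():
    ratio = rate / overall if overall else 0.0
    # Screening bounds are illustrative; deviations warrant review, not
    # an automatic finding of bias.
    flag = "  <-- investigate" if ratio > 1.25 or ratio < 0.80 else ""
    print(f"{segment}: alert rate {rate:.2%} ({ratio:.2f}x overall){flag}")
```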

A risk advisory professional can provide invaluable assistance with model risk and validation, as well as with understanding how to meet auditors' and regulators' requirements while improving institutional operational effectiveness and efficiency.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
