Introduction
The regulatory landscape for artificial intelligence continues to evolve. For regulated entities such as Canadian banks and insurance companies and their foreign affiliates, federally regulated credit unions, and Canadian affiliates of foreign banks and insurance companies operating in Canada ("FIs"), OSFI Guideline E-23 – Model Risk Management (2027), which will come into force on May 1, 2027 ("Guideline E-23"), adds a new layer of regulatory and contracting complexity.
Contracts between service providers and FIs are already complicated because FIs must comply with Canadian privacy laws such as PIPEDA and with privacy laws in the other jurisdictions in which they operate, with regulatory guidelines such as OSFI's updated B-10 Third-Party Risk Management Guideline and OSFI's Technology and Cyber Security Incident Reporting Advisory, and with other guidance. Financial institutions that contract for services in multiple jurisdictions such as the United States, EU, UK, or Asia must also meet additional regulatory requirements. Some are specific to privacy or the regulation of financial institutions. Others are AI specific, such as new laws governing AI in the European Union, South Korea, and numerous U.S. states. There is also a myriad of AI governance frameworks including, recently in connection with agentic AI, India's and Singapore's Model AI Governance Framework(s) for Agentic AI.
Guideline E-23 has already started to materially impact agreements and contractual negotiations with third party providers. Because the models that fall within the guideline traverse almost all business functions powered or supported by AI and other models, third parties who supply products and services to FIs must satisfy them that their models have been developed, deployed, and are monitored in ways that enable FIs to comply with the guideline and with their AI and model policies. Many third party suppliers have not yet developed processes or contractual terms to support the new Guideline E-23 requirements, which makes contracting in the current environment difficult. That difficulty is the focus of this blog post.
Questions answered in this post
- What is OSFI Guideline E-23?
- What is the purpose of Guideline E-23 and what risks is it designed to address?
- How broadly does OSFI define a "model" under Guideline E-23?
- Does the definition of "model" extend beyond traditional credit, capital, and actuarial models?
- Are AI and machine learning systems captured under Guideline E-23?
- Whose models fall within the scope of E-23, including internally developed and third-party models?
- Does Guideline E-23 apply to models embedded in vendor products and services?
- Can E-23 apply to outsourcing, managed services, and cloud service arrangements?
- Do model-driven IT services potentially impact privacy, cybersecurity, and operational resilience?
- What types of third-party service providers are likely to be affected by E-23?
- How does Guideline E-23 interact with OSFI's B-10 Third-Party Risk Management Guideline?
- What new governance, documentation, and oversight requirements does E-23 impose on financial institutions?
- What information, transparency, and assurances will financial institutions increasingly require from service providers?
- How is Guideline E-23 affecting contracting practices and negotiations with suppliers?
- Is E-23 complicating and, in some cases, slowing the adoption of AI-based products and services?
- What are the broader regulatory and commercial implications of OSFI's expansive approach to model risk management?
Purpose of Guideline E-23
The purpose of the guideline is to set out OSFI's expectations for effective enterprise-wide model risk management (MRM) by FIs. The Guideline is concerned with how FIs' use of models in their businesses can affect their risk profiles. In OSFI's words, federally regulated FIs "should be cognizant of how the use of models in their business can impact their risk profile and should have effective risk management practices to mitigate the risks." OSFI's expected outcome is that "[m]odel risk is well understood and managed across the enterprise."
What types of models are regulated by Guideline E-23?
The technologies regulated under Guideline E-23 are "models", a very broad term defined as:
An application of theoretical, empirical, judgmental assumptions or statistical techniques, including AI/ML methods, which processes input data to generate results. A model has three distinct components:
- data input component that may also include relevant assumptions,
- processing component that identifies relationships between inputs, and
- result component that presents outputs in a format that is useful and meaningful to business lines and control functions.
Guideline E-23 adopts an expansive definition of what constitutes a "model". While many people instinctively think of a model as a complex quantitative engine—such as a credit loss forecasting model, or an actuarial reserving model—E-23 goes far beyond that traditional framing. Under the Guideline, a "model" is essentially any structured analytical method that takes input data, applies some form of processing logic, and generates results that are meaningful to business lines or control functions. The definition captures not only traditional statistical techniques but also AI and machine learning methods.
What makes this definition of model particularly sweeping is that it captures models that are based on theoretical, empirical, or judgmental assumptions as well as models grounded in formal statistical approaches. The emphasis is not on whether the method is "advanced", but rather whether it processes inputs through defined relationships to produce outputs that influence decision-making, operational actions, risk control, or reporting. In other words: if a bank or insurer relies on a systematic analytical process to generate risk metrics, pricing outputs, forecasting estimates, classifications, rankings, or scores, OSFI may treat it as a model—whether the underlying logic is an AI model, a neural network, or some other analytical tool.
The expansiveness of the definition is reinforced by the three required components of a model: (a) a data input component (which may include assumptions), (b) a processing component that defines the relationship between inputs, and (c) a result component that presents outputs in a decision-useful form. Many operational and business tools fall within this definition. A "model" may be embedded in software, implemented as a spreadsheet, delivered through dashboards, or consumed through API-based services.
A model does not have to be labelled a "model" to be regulated as one. Many tools with nomenclature such as "analytics", "risk scoring", "optimization", "monitoring", "forecasting", "insights", "advanced delivery tools", or "AI" may be models under Guideline E-23.
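To make the three-component framing concrete, here is a minimal, hypothetical Python sketch of a tool few would instinctively call a "model" yet which has all three components the definition requires. Every name, assumption, and threshold is invented for illustration; none of it is drawn from the guideline or any real system.

```python
# Hypothetical illustration of E-23's three model components.

def collect_inputs(applicant: dict) -> dict:
    """Data input component: raw data plus an embedded judgmental assumption."""
    assumed_income_growth = 0.02  # invented assumption baked into the input
    return {
        "income": applicant["income"] * (1 + assumed_income_growth),
        "debt": applicant["debt"],
    }

def process(inputs: dict) -> float:
    """Processing component: a defined relationship between the inputs."""
    return inputs["debt"] / max(inputs["income"], 1.0)

def report(score: float) -> str:
    """Result component: output in a form meaningful to a business line."""
    return "REFER TO UNDERWRITER" if score > 0.4 else "AUTO-APPROVE"

print(report(process(collect_inputs({"income": 90_000, "debt": 42_000}))))
```

However modest the logic, the tool takes inputs, applies a defined relationship, and produces a decision-useful output, which is all the definition asks.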
Whose models fall within Guideline E-23?
Guideline E-23 applies to models developed or used by FIs. It includes internally developed models and models licensed or procured from third parties. It extends to models made available under a myriad of third party arrangements including models built using third party AI systems, off-the-shelf supplier AI products, supplier AI systems embedded in supplier products and services, and third party AI systems embedded in supplier products and services.
While Guideline E-23 refers to "models" as if they operate independently, the guideline would also apply to agentic AI solutions, being AI based systems that can autonomously plan, make decisions, and take actions toward goals by coordinating multiple models, tools, and processes with minimal human intervention.
The scope of models is made clear by Guideline E-23 which states: "Institutions should have defined processes to periodically identify models used throughout the enterprise, including vendor and third-party models" and "Institutions should establish an MRM framework that ... covers models or data sourced from external sources like foreign offices or third-party vendors (pursuant to our Guideline B-10 Third-Party Risk Management Guideline)."
The reference to the OSFI B-10 Guideline illustrates the breadth of the possible third party arrangements under which models are sourced. The B-10 guideline addresses all third-party arrangements, including outsourced activities, functions, and services that would otherwise be undertaken by the FI itself, and arrangements that could introduce third party risks such as technology, cyber security, information security, data management and privacy, business continuity, ESG, reputational, strategic, and financial risks. Examples include providers such as:
- Credit bureau / credit decisioning providers
- IFRS 9 / CECL modelling and analytics vendors
- AML / sanctions screening and monitoring vendors
- Fraud detection platforms (payments/cards)
- Insurtech underwriting / pricing engines
- Market risk and capital analytics providers
- Climate risk modelling firms
- Customer analytics / personalization providers (AI-driven)
- HR / workplace analytics vendors
- Vendor "software platforms" with embedded analytics such as treasury/ALM software, risk dashboards with forecasting engines, portfolio optimization tools, collection/recovery prioritization tools
Guideline E-23 can apply to IT services
A particularly important — and often underappreciated — implication of OSFI's broad definition is that it can apply well beyond the conventional "risk modelling" universe. Guideline E-23 does not limit the concept of "model" to FIs' traditional risk assessments. It also captures any analytical method that processes data using theoretical, empirical, judgmental or statistical techniques to produce outputs meaningful to business lines and control functions. OSFI makes this clear in the guideline, stating:
Model risk involves the risk of adverse financial impact (for example, inadequate capital, financial losses, inadequate liquidity, operational, or reputational consequences) arising from the design, development, deployment, and/or use of a model. This is the inherent risk of using a model and refers to the fundamental characteristics of the model and materiality to the institution.
Residual model risk for the purpose of this guideline refers to the risk that remains after institutions have implemented controls, validation processes, monitoring, and other risk-mitigating measures. Thus, residual model risk captures the portion of risk that continues to exist despite institutions' best efforts to identify, measure, and mitigate model risk.
This means that model risk governance may extend into domains that are increasingly delivered through third parties such as information technology services, outsourcing, managed services, and cloud platforms—particularly where these services embed algorithmic decisioning, scoring, classification, prediction, optimization, or operational control functions.
In modern outsourced IT environments, service providers frequently embed models inside core service functions. For example, managed security service providers (MSSPs), cloud security management platforms, and endpoint detection and response tools often rely on machine learning to classify activity as benign or malicious, to prioritize incident response, or to trigger automated containment actions. Under Guideline E-23's definition, these systems may constitute "models" because they process input data (logs, telemetry, endpoint signals), apply a processing methodology (rules, correlation logic, behavioural baselines, anomaly detection, supervised learning classifiers), and generate outputs that are meaningful to the organization (alerts, risk scores, block/allow decisions, escalation thresholds). For FIs, those outputs can materially affect confidentiality, integrity, and availability of systems and services—meaning model risk becomes operational risk.
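A minimal sketch, using invented telemetry and thresholds, shows why such a security pipeline can satisfy the definition: input data, a processing methodology (here, a crude behavioural baseline), and outputs that drive containment decisions.

```python
# Hypothetical MSSP-style alerting logic. Baseline data, scoring logic,
# and thresholds are all invented for illustration.
import statistics

BASELINE_LOGINS_PER_HOUR = [4, 6, 5, 7, 5, 6, 4, 5]  # input: historical telemetry

def anomaly_score(observed: int) -> float:
    """Processing component: distance from a behavioural baseline."""
    mean = statistics.mean(BASELINE_LOGINS_PER_HOUR)
    stdev = statistics.stdev(BASELINE_LOGINS_PER_HOUR) or 1.0
    return abs(observed - mean) / stdev

def decide(observed: int) -> str:
    """Result component: an output that drives containment actions."""
    score = anomaly_score(observed)
    if score > 3.0:
        return "BLOCK AND ESCALATE"
    if score > 1.5:
        return "ALERT"
    return "ALLOW"

print(decide(22))  # an unusual spike -> "BLOCK AND ESCALATE"
```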
OSFI's definition also intersects directly with privacy risks arising from use of models in IT outsourcings. Many outsourced IT providers use tools that automatically identify, classify, tag, or route data. These services could incorporate models that determine whether content constitutes personal information, confidential business information, or regulated financial data. A misclassification by a model could have downstream consequences; personal information could be exposed, transmitted or processed in the wrong jurisdiction, shared with an unauthorized processor, or retained longer than permitted. From an OSFI perspective, the technology provider may be "just an IT service provider", but if its embedded model is driving decisions about personal data flows, the FI has effectively outsourced a model-driven compliance function.
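The same anatomy appears in data-classification tooling. The sketch below, with invented patterns and routing rules, shows how a classification output can directly determine where personal information flows, which is exactly the downstream consequence described above.

```python
# Deliberately simplified, hypothetical data-classification model:
# the tag it produces drives where the data is stored.
import re

PI_PATTERNS = [
    re.compile(r"\b\d{3}-\d{3}-\d{3}\b"),    # an invented SIN-like pattern
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),  # a rough email-address pattern
]

def classify(text: str) -> str:
    """Processing component: pattern logic deciding whether content is PI."""
    return "PERSONAL_INFO" if any(p.search(text) for p in PI_PATTERNS) else "GENERAL"

def route(text: str) -> str:
    """Result component: a misclassification here sends PI to the wrong place."""
    destinations = {
        "PERSONAL_INFO": "canadian-region-store",
        "GENERAL": "shared-analytics-lake",
    }
    return destinations[classify(text)]

print(route("Contact jane@example.com about the renewal"))  # canadian-region-store
```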
Similarly, model-driven IT services can directly affect security outcomes. In outsourced or cloud environments, model outputs may govern identity, access, and authentication decisions. A provider's fraud engine may determine whether a login attempt is suspicious; an identity platform may decide whether to trigger step-up authentication; an adaptive access model may throttle access based on device or geo-risk; a bot-detection engine may block traffic; a vulnerability prioritization model may determine patch sequencing. Each of these cases reflects model inputs, processing logic, and decision outputs. If a model has poor accuracy or is not appropriately governed, it could lead either to unauthorized access or disruption of legitimate activity. Either outcome potentially engages operational or reputational risk and could have direct implications for incident management and cyber resilience.
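A hypothetical adaptive-access model illustrates the pattern: weighted signals in, a risk score in the middle, an authentication decision out. The weights and cut-offs are invented, and real platforms use far richer features, but the three-component structure is the same.

```python
# Hypothetical adaptive-access model; all weights and thresholds invented.

def login_risk(signals: dict) -> float:
    """Processing component: a weighted combination of input signals."""
    score = 0.0
    score += 0.4 if signals.get("new_device") else 0.0
    score += 0.3 if signals.get("unusual_geo") else 0.0
    score += 0.5 if signals.get("impossible_travel") else 0.0
    return score

def access_decision(signals: dict) -> str:
    """Result component: the output that governs authentication."""
    score = login_risk(signals)
    if score >= 0.8:
        return "DENY"
    if score >= 0.4:
        return "STEP_UP_AUTH"  # e.g., require a second factor
    return "ALLOW"

print(access_decision({"new_device": True, "unusual_geo": True}))  # STEP_UP_AUTH
```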
Continuity of operations and operational resilience bring these issues into even sharper focus. Outsourced IT service providers commonly rely on forecasting and optimization models—such as predictive autoscaling, load balancing, capacity planning, failure prediction, and automated rerouting models—to maintain service availability. While these may be marketed as "advanced delivery tools", E-23's definition of models suggests they can be models if they convert input data (utilization metrics, historical demand patterns, error rates) into outputs that guide operational decision-making (resource allocation, traffic routing, failover initiation). If the model is miscalibrated, biased by incomplete data, or used outside of its intended domain (for example, during stress events), it can contribute directly to outages, degraded service, or cascading failures—the types of events OSFI expects institutions to anticipate and govern for under both OSFI Guidelines E-23 and B-10.
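A minimal sketch of such a forecasting model, with invented numbers: a naive trend extrapolation over utilization history that drives scaling actions, including the miscalibration failure mode noted above.

```python
# Hypothetical capacity-forecasting model of the kind marketed as an
# "advanced delivery tool". All values are invented for illustration.

def forecast_next(utilization_history: list[float]) -> float:
    """Processing component: extrapolate the recent trend."""
    recent = utilization_history[-3:]
    trend = (recent[-1] - recent[0]) / max(len(recent) - 1, 1)
    return recent[-1] + trend

def scaling_action(utilization_history: list[float]) -> str:
    """Result component: an output that directly drives availability."""
    predicted = forecast_next(utilization_history)
    if predicted > 0.85:
        return "SCALE_OUT"  # add capacity before saturation
    if predicted < 0.30:
        return "SCALE_IN"   # reclaim idle capacity
    return "HOLD"

# During a stress event a naive trend can miscalibrate badly -- the
# failure mode the guideline expects institutions to govern for.
print(scaling_action([0.55, 0.68, 0.81]))  # SCALE_OUT
```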
From a governance perspective, the consequence is that model risk management may need to be tightly integrated with third-party contracting and technology risk programs. A cloud contract or IT services agreement might not refer to any "model" at all, yet still provide outputs generated by embedded models that materially affect security controls, privacy compliance, and availability of critical services.
Guideline E-23 effects on contracting and AI adoption
Guideline E-23 sets out in detail how OSFI expects FIs to manage the risks associated with models. As the risks to be managed relate to internal and external models (including models used by service providers to FIs), FI agreements with service providers need to address E-23 compliance. This is complicating and protracting the negotiation of IT agreements. In some cases, the new governance requirements are delaying and even preventing FIs' adoption of AI based products and services. This is a problem OSFI may not have anticipated when it introduced and finalized the Guideline. These problems can be seen by examining the E-23 requirements and then mapping those requirements onto the information, documentation, and assurances service providers must give FIs to put them in a position to comply with the guideline.
Guideline E-23 is binding only on FIs and not on their suppliers. However, FIs cannot comply with the guideline without specific contractual commitments regarding models from their third party suppliers. As the list of E-23 MRM requirements below makes clear, FIs have numerous obligations to assess, monitor and mitigate risks associated with models before they deploy and use them. When those models have been developed by third party suppliers, FIs can only satisfy their MRM obligations by working with their suppliers to ensure that the MRM requirements are met.
Because of the diversity of models, there is no "one size fits all" approach. In some cases, such as where models are embedded in supplier systems and are used to control operational processes, many of the MRM governance processes must be flowed down to the supplier, which also has to provide information and assurances to FIs about its models. In other cases, FIs' main needs may be information, documentation, and assurances related to AI safety and responsible AI practices to enable them to decide whether to deploy or use models. However, because many suppliers now include a variety of AI systems and other models in their IT and other service offerings, coming up with a calibrated approach that fits all service offerings is a real challenge.
What also makes negotiations between FIs and their suppliers challenging is that FIs and suppliers often disagree in their risk assessments of models and have different MRM frameworks. Under the guideline, FIs' decisions with respect to the use of models have to be based on their broader risk and governance frameworks, their appetite for model risk, and their own MRM policies. These may differ from FI to FI and between FIs and suppliers, which have different imperatives and business models. Some suppliers' frameworks are not aligned to any recognized standards, and some suppliers are reluctant to commit to specific frameworks given how fast AI and the frameworks are evolving.
Some FIs push for transparency and are increasingly pressing their service providers to comply with recognized risk management frameworks such as those published by NIST or ISO, and with applicable voluntary governance frameworks. Some insist that they will at least have the right to opt in or opt out of the use of models and rights to prevent suppliers from introducing new models into services without their consent.
The bottom line is that FIs must be able to comply with OSFI's requirement that MRM requirements be implemented and met in proportionate ways:
A risk-based approach is implemented and ensures MRM requirements are proportional to the level of model risk identified by the institution. A risk-based MRM framework is documented and implemented in a way that enables consistency across functional and business units. Institutions identify sources of model risk, and ensure adequate resources are allocated to manage, mitigate, or accept those risks as appropriate.
I set out below a list of Guideline E-23 MRM requirements. FIs that contract with third party suppliers must consider what terms need to be negotiated to enable them to comply with the guideline for all of the different products and services that use or access models.
- FIs must identify, create an inventory of, and track models at all lifecycle stages (development, deployment, monitoring, decommissioning). (A minimal sketch of what such an inventory could look like follows this list.)
- FIs must assess risks associated with models prior to deployment. This involves assessing the risk of adverse operational or reputational consequences and performing pre-deployment risk assessments covering cybersecurity risk, infrastructure vulnerabilities, and other potential operational risks.
- FIs must be aware of a given model's intended use, inherent limitations, and potential negative outcomes to their business.
- FIs must have a model review process that includes the following:
- Validates that models are properly specified, working as intended, and fit-for-purpose.
- Evaluates the level of explainability for the model workings as per the intended use of the model.
- Confirms the adoption of robust data governance with standards for collecting, storing, and accessing data used in models.
- Ensures data used to develop the models are suitable for the intended use including avoiding unwanted bias.
- FIs should have clear standards for performance.
- FIs should ensure that models are deployed in an environment with quality and change control processes.
- FIs must decide whether to approve the use of models. Model approval processes should occur throughout the model lifecycle.
- FIs must understand and manage risks throughout the model lifecycle. Guideline E-23 requires that model risk not be a one-time event before a model is deployed. FIs (and their service providers) therefore must have an adequate governance process to manage and control model risk over the entire model lifecycle.
- FIs should have defined standards for model monitoring to ensure models remain fit-for-purpose and to detect performance issues or breaches. The standards for model monitoring include "implementing processes for handling AI/ML's unique challenges, such as autonomous decision making, autonomous re-parametrization, and the elevated potential for model drift."
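For illustration only, the sketch below shows the kind of inventory-and-lifecycle plumbing the requirements above imply, using invented field names and a simplified lifecycle. It is not OSFI-prescribed, and a real MRM inventory would capture far more (validation history, monitoring thresholds, approval records).

```python
# Hypothetical model inventory record tracking lifecycle stages and the
# vendor assurances an FI relied on. All field names are invented.
from dataclasses import dataclass, field
from typing import Optional

LIFECYCLE = ("development", "deployment", "monitoring", "decommissioned")

@dataclass
class ModelRecord:
    name: str
    owner: str
    vendor: Optional[str]        # None for internally developed models
    intended_use: str
    risk_rating: str             # e.g., "low" / "medium" / "high"
    stage: str = "development"
    validation_evidence: list = field(default_factory=list)

    def advance(self, new_stage: str) -> None:
        """Lifecycle moves only forward, and only through defined stages."""
        if LIFECYCLE.index(new_stage) <= LIFECYCLE.index(self.stage):
            raise ValueError(f"cannot move {self.name} back to {new_stage}")
        self.stage = new_stage

inventory = [
    ModelRecord("fraud-scoring-v2", "payments", "Acme Analytics",
                "card fraud detection", "high"),
]
inventory[0].validation_evidence.append("vendor model card, 2027-03")
inventory[0].advance("deployment")
print(inventory[0])
```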
Concluding thoughts
Guideline E-23 marks a significant evolution in OSFI's approach to risk governance by recognizing that model risk now permeates virtually every aspect of modern financial institutions — from credit and capital management to cybersecurity, privacy compliance, and cloud operations. By adopting a deliberately broad and technology-neutral definition of "model," OSFI has effectively extended model risk management obligations deep into third-party ecosystems and IT service arrangements. For FIs, this means rethinking how models are identified, governed, and overseen across the enterprise. For service providers, it means increased scrutiny, transparency, and contractual obligations tied to how analytical and AI-driven tools are developed and operated. As AI and model-driven services continue to proliferate, Guideline E-23 will play a central role in shaping both regulatory compliance and commercial relationships across the financial services technology landscape.
This article was first published on barrysookman.com.