The use of Artificial Intelligence ("AI") by federally regulated financial institutions ("FRFIs") is growing rapidly, and so are the risks associated with its use. On September 24, 2024, the Office of the Superintendent of Financial Institutions ("OSFI") and the Financial Consumer Agency of Canada ("FCAC") released a joint report1 on the uses of AI and its associated risks ("AI Report"). The AI Report summarizes the findings of a 2023 survey of FRFIs seeking feedback on their AI and quantum computing preparedness. It also outlines the key risks FRFIs face in adopting AI and underscores the importance of balancing innovation with sound risk management.
How FRFIs Are Using AI
FRFIs are using AI to achieve operational efficiency, assist with core functions, and facilitate customer engagement. AI supports FRFIs across a wide range of functions, including underwriting, claims management, fraud detection, and online customer support. As the number of institutions adopting AI grows and its functions broaden, identifying and managing the risks associated with AI adoption becomes increasingly important.
Internal and External Risks of AI
The AI Report identifies the following internal risks arising from the adoption and use of AI by FRFIs:
- Data governance: Respondents' top concern was risk relating to data privacy, data governance, and data quality, especially where data is located in different jurisdictions and data ownership is fragmented.
- Model risk and explainability: The complexity of AI models often makes their outputs difficult to explain. FRFIs must involve stakeholders in the design and implementation of AI models and be able to explain to them how the models make decisions. Explainability is particularly challenging for deep learning models such as generative AI (e.g., ChatGPT).
- Legal, ethical, and reputational: AI models can pose legal and reputational risks to FRFIs due to the evolving legal landscape, privacy concerns, customer consent issues, and bias inherent in AI models.
- Third-party: Heavy reliance on third-party AI models poses a concentration risk in the event of an outage. FRFIs remain ultimately responsible for the AI models they employ and must ensure that these models, and the third parties that supply them, comply with the FRFIs' internal standards.
- Operational and cybersecurity: While AI helps FRFIs achieve operational efficiency, it also exposes critical functions to heightened risk from cyber attacks, data breaches, and financial losses caused by model malfunctions.
The AI Report also identifies the following risks external to FRFIs, which often arise from threat actors and market pressures:
- Cyber and fraud threats: FRFIs have become significantly more vulnerable to cyber attacks as AI adoption increases, with attacks quadrupling in recent years. Threat actors increasingly use generative AI to commit fraud and theft. FRFIs are exposed to these attacks through email phishing and smishing scams, identity theft using deepfakes and voice spoofing, and malware targeting core systems and critical technologies.
- Business and financial: FRFIs may face mounting pressure to adopt AI as it becomes a staple in the industry. Institutions that lag in adoption risk losing financial and competitive ground to peers deploying more efficient and autonomous systems.
- Credit, market, and liquidity: Wider adoption of AI may heighten retail credit risk, and corporate exposures could produce credit losses if AI disruption creates industry winners and losers. Market and liquidity risks may also rise due to AI-driven herding behavior and flash crashes.
Challenges and Opportunities of AI
The rapid adoption of AI by FRFIs is outpacing risk management practices, posing significant governance challenges. The novelty of AI demands swift and vigilant risk management, especially in business lines not traditionally subject to model risk governance. Gaps in oversight can arise if FRFIs address AI risks only within individual frameworks rather than through comprehensive, multidisciplinary approaches.
Continuous risk assessment is needed to identify initial, evolving, and novel risks. The absence of contingency measures or safeguards, such as human-in-the-loop controls and performance monitoring, increases the risk of AI models behaving unexpectedly. Generative AI models further amplify risks relating to explainability, bias, and third-party dependencies, necessitating specific controls and employee education, as insufficient AI training can lead to poor decision-making.
With proper risk management, FRFIs stand to benefit from the efficiencies offered by AI systems. FRFIs should be transparent with customers about their use of AI and the shortcomings of adopted models, and should be prepared to explain how their AI models reach conclusions.
OSFI's Next Steps
In furtherance of the regulators' aim to keep pace with the impacts of AI on FRFIs, OSFI announced plans for a second session of the Financial Industry Forum on Artificial Intelligence ("FIFAI"). Discussions in the first FIFAI session led to OSFI's 2023 publication on responsible AI, which identifies four areas of importance to AI models (the EDGE Principles; see our summary of this report here). OSFI acknowledges, however, that the 2023 report may not have fully addressed the challenges and trade-offs inherent in AI, and it aims to build on this work in future FIFAI sessions.
Footnote
1 Office of the Superintendent of Financial Institutions, OSFI-FCAC Risk Report – AI Uses and Risks at Federally Regulated Financial Institutions (24 September 2024).
The foregoing provides only an overview and does not constitute legal advice. Readers are cautioned against making any decisions based on this material alone. Rather, specific legal advice should be obtained.
© McMillan LLP 2024