ARTICLE
13 April 2026

Adopting AI In Financial Services: Key Takeaways From The Second Financial Industry Forum On AI

Fasken

Contributor

Fasken is a leading international law firm with more than 700 lawyers and 10 offices on four continents. Clients rely on us for practical, innovative and cost-effective legal services. We solve the most complex business and litigation challenges, providing exceptional value and putting clients at the centre of all we do. For additional information, please visit the Firm’s website at fasken.com.

Artificial intelligence (AI) is increasingly integrated into Canada’s financial services sector and is affecting operational processes, product delivery, risk management, and market dynamics.

Over the course of four workshops in 2025, the second Financial Industry Forum on Artificial Intelligence (FIFAI II), convened by the Office of the Superintendent of Financial Institutions (OSFI) and the Global Risk Institute with participation from other regulators and industry stakeholders, examined the evolving implications of AI adoption for financial institutions, consumers, and the broader financial ecosystem. Published on March 23, 2026, the FIFAI II report (the "Report") introduces the AGILE framework (Awareness, Guardrails, Innovation, Learning, and Ecosystem Resiliency) as an organizing structure for managing AI-related risks and capitalizing on AI-related opportunities in the financial services sector.

The Report reflects views and insights from individual FIFAI II speakers and participants, and should not be interpreted as guidance from regulatory authorities.

Key AI Risk Themes

The Report identifies a number of risks associated with increased AI deployment in the financial sector.

Strategic Risks

Strategic challenges associated with AI adoption in financial institutions include variability in pace of adoption, fragmented AI approaches, and constraints related to resources and data quality. The Report further notes that financial institutions operate within a multi‑jurisdictional regulatory environment with limited AI‑specific guidance, which has contributed to uncertainty regarding the application of existing requirements to AI‑enabled activities.

Security and Cybersecurity Threats

Financial institutions are experiencing an increase in AI‑enabled threats, including social engineering, deepfakes, voice spoofing, synthetic identity fraud, and automated cyberattacks. Disinformation and misinformation campaigns could disseminate false or misleading claims about, for example, financial institutions’ solvency or regulatory compliance, and thereby undermine trust.

Consumer Risks

Consumer‑related risks associated with AI use exist in areas such as credit adjudication, underwriting, product recommendations, and investment services. As AI‑enabled systems increasingly influence consumer outcomes, issues relating to transparency, explainability, accountability, and disclosure have become more prominent.

Additional risks include the potential for biased or unfair outcomes arising from data limitations, increased exposure of consumers to AI‑enabled fraud, and challenges for consumers in identifying when AI systems are used. The Report highlights that these risks may have disproportionate effects on certain populations, including seniors, newcomers, low‑income individuals, and persons with limited digital access or literacy.

Knowledge and Talent Gaps

AI’s growing role in financial services is straining access to skilled talent. Expertise is concentrated in larger organizations, leaving smaller firms and oversight bodies more exposed. Meanwhile, lagging AI education and rapid technological change increase operational and consumer risks. Inadequate workforce upskilling and limited consumer understanding can reduce AI’s benefits and heighten exposure to fraud. 

Third‑Party Concentration and Supply Chain Risks

Financial institutions increasingly rely on external providers for data, models, cloud infrastructure, and AI‑enabled services, often through complex multi‑tier supply chains. Risks include limited visibility into third-party controls and practices, outsized dependence on a small group of external vendors, and limited contractual leverage.

Financial Stability Risks

These risks include the potential amplification of operational disruptions, correlated behaviour among AI‑driven trading models trained on similar data, and broader labour and macroeconomic effects linked to automation and business transformation.

The AGILE Framework

The AGILE framework introduced in the Report is meant to guide responsible AI adoption, innovation and resilience across Canada's financial sector. As set forth below, the Report assigns implementation priorities for each element of the framework to help institutions navigate AI risks and leverage AI opportunities.

Awareness

Stay ahead of AI-driven risks by understanding how technologies reshape the risk landscape through organizational enhancements such as AI oversight, board engagement, and expanded monitoring and stress testing scenarios.

Immediate priorities: Strengthen executive awareness by ensuring boards and senior leaders actively understand evolving AI risks and proactively prepare for emerging technologies like agentic AI through clear governance frameworks and adaptive controls.

Short/medium term: Expand stress testing and monitoring by incorporating AI-driven macroeconomic scenarios into enterprise risk frameworks and continuously tracking labour market and economic impacts to anticipate systemic vulnerabilities and inform strategies.

Guardrails

Make best practice regular practice with strong controls, data-integrity standards, human oversight for high-impact decisions, transparency and appropriate consumer outcomes, and rigorous third-party oversight.

Immediate priorities: Reinvigorate focus on the fundamentals by making best practice regular practice with strong governance and risk controls that work as intended and building muscle memory in areas such as cyber hygiene and third-party due diligence.

Short/medium term: Drive evergreen governance and transparency by maintaining adaptable frameworks, enforcing strong data integrity standards, and delivering consumer-centric disclosures with explainable AI decisions and inclusion by design.

Innovation

Adopt an AI growth mindset that treats AI as a driver of competitiveness that enhances consumer financial well-being and protection, supported by bold investments in talent, modern infrastructure and responsible innovation.

Immediate priorities: Enable bold, responsible AI-driven innovation by encouraging experimentation and scaled adoption of AI in customer service, market operations, internal processes, and security-focused applications, supported by appropriate safeguards, sandboxes, and outcome-based supervision that allows new products, services, and business models to emerge.

Short/medium term: Accelerate AI-driven transformation by investing in tools and talent, modernizing legacy systems with standardized data and zero-trust security, and enhancing consumer financial well-being and protection through such things as personalized guidance and proactive fraud detection.

Learning

Build AI skills at every organizational level, including employees and management, through continuous training and collaborative initiatives, while also empowering consumers with AI literacy to help them protect themselves and make informed choices.

Immediate priorities: Establish financial industry AI literacy and upskilling initiatives through continuous learning systems, organizational AI training frameworks, and collaborative industry initiatives that accelerate talent development and consumer awareness.

Short/medium term: Advance sector-wide AI capability by building deep talent pipelines through university partnerships, scaling AI training across the organization, and empowering consumers through transparency and accessible AI literacy so they can help protect themselves against threats, understand AI-driven decisions, and make confident, informed choices.

Ecosystem Resiliency

Fortify system-wide defences through improved third-party oversight, regulatory clarity, enhanced digital identity security, expanded real-time threat sharing, and upgraded incident-response frameworks.

Immediate priorities: Pursue greater regulatory certainty on AI-related risks by beginning to clarify how existing rules apply to AI and aligning across agencies where possible on messaging, priorities and next steps.

Short/medium term: Strengthen information-sharing frameworks and joint intelligence efforts by developing clear legal and privacy frameworks for threat information sharing, standardizing formats, and encouraging participation from institutions of all sizes.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

