11 February 2025

Navigating CSA Staff Notice 11-348: AI Systems In Canadian Capital Markets

MLT Aikins LLP

MLT Aikins LLP is a full-service law firm of more than 300 lawyers with a deep commitment to Western Canada and an understanding of this market’s unique legal and business landscapes.

The Canadian Securities Administrators' (CSA) Staff Notice and Consultation 11-348 – Applicability of Canadian Securities Laws and the use of Artificial Intelligence Systems in Capital Markets (the Notice) provides guidance on the use of artificial intelligence (AI) systems in capital markets.

Below are key considerations for market participants aiming for compliant AI integration.

1. Overview

The Notice emphasizes that the complexity and risk of AI systems require market participants to develop tailored compliance approaches, and it offers guidance on how to do so effectively.

The Notice applies to the following market participants:

  • Registrants
  • Non-investment fund reporting issuers
  • Marketplaces and marketplace participants
  • Clearing agencies and matching services utilities
  • Trade repositories
  • Designated rating organizations and designated benchmark administrators

The key objectives of the Notice are to safeguard investors and preserve trust in capital markets, foster responsible AI innovation and mitigate risks such as bias, lack of transparency and systemic vulnerabilities.

2. Overarching themes

Although different technologies may require tailored regulatory approaches, the Notice emphasizes that securities laws are technology-neutral and principles-based: compliance depends on the activity being conducted, not the technology used. Across market applications, market participants must do the following:

  • Implement tailored governance and risk management practices for AI systems, including testing, human oversight, AI literacy, cybersecurity measures, data integrity and supply chain evaluation
  • Ensure AI systems are sufficiently explainable to maintain transparency, accountability and compliance. This must balance advanced capabilities with clear, auditable decision-making
  • Provide clear, accurate disclosure of AI system use to ensure investor transparency, mitigate risks and avoid misleading claims or "AI washing"
  • Ensure AI systems do not create conflicts of interest, addressing risks like bias, lack of explainability and flawed code

3. Specific guidance and next steps

One of the most efficient ways to manage AI-related risks is to develop a clear AI policy.

As a starting point, market participants should also take the following steps with respect to AI:

  • Review what AI, if any, is used or proposed to be used in their operations
  • Review AI for compliance with applicable laws
  • Implement appropriate policies and procedures with respect to AI
  • Implement appropriate training with respect to AI
  • Implement appropriate communications with respect to use of AI

The Notice outlines specific guidance for different market participants that should be reviewed and incorporated into the above.

Registrants

Registrants must comply with National Instrument 31-103 Registration Requirements, Exemptions and Ongoing Registrant Obligations, disclose AI system use, implement governance controls, ensure compliance with outsourcing and record-keeping rules, mitigate conflicts of interest and maintain oversight to prevent AI-related biases and risks.

  • Advisers and dealers – Advisers and dealers must ensure AI systems used for trade execution, KYC, client support, decision-making and portfolio management comply with securities laws, maintain explainability and preserve human oversight.
  • Investment fund managers – Investment fund managers (IFMs) using AI systems must provide transparent disclosures, assess AI-specific risks and avoid misleading sales communications. They must also manage conflicts of interest and obtain approvals for material changes to investment strategies involving AI.

Non-investment fund reporting issuers (non-IF issuers)

Non-IF Issuers must comply with disclosure requirements under National Instrument 51-102 Continuous Disclosure Obligations. They must ensure investors have accurate, timely and material information about their use or development of AI systems.

Disclosures should be tailored and non-boilerplate and address AI-related risks, governance, business impact and forward-looking assumptions to avoid misleading statements.

  • Disclosure of current AI systems business use – Non-IF issuers must provide detailed, non-generic disclosures on their AI use. These must cover definitions, applications, risks, financial impact, contracts, competitive positioning and data sources to provide investors with transparency.
  • AI-related risk factors – Clear, entity-specific disclosures on AI-related risks must also be provided. These include operational, third-party, ethical, regulatory, competitive and cybersecurity risks.
  • Promotional statements about AI-related use – AI-related disclosures must be fair, balanced and substantiated. They must avoid exaggerated or misleading claims while equally presenting both benefits and risks.
  • AI and forward-looking information (FLI) – Non-IF issuers must ensure forward-looking AI disclosures are reasonable and clearly identified and include material assumptions, risks and updates. They must avoid misleading projections.

Marketplaces and marketplace participants

The Notice emphasizes that AI systems have significantly transformed automated order systems (AOS), enhancing trading efficiency, improving liquidity forecasting and lowering costs, while also raising critical regulatory and compliance challenges. In addition to existing comprehensive requirements under securities laws, marketplaces and marketplace participants deploying AI systems must maintain comprehensive internal controls, cybersecurity measures and risk management frameworks. AOS using AI must adhere to existing regulatory requirements, prevent market abuse and ensure explainability, ongoing oversight and staff training to effectively manage AI-related risks.

Firms using AI-powered AOS must adhere to the same regulations as their non-AI counterparts to uphold market integrity and safeguard investors. AI-driven AOS must comply with regulations related to market manipulation, insider trading and other forms of market abuse, and firms must ensure they have the capability to detect and prevent such issues.

Clearing agencies and matching service utilities

Clearing agencies and matching service utilities must already comply with securities laws covering supervisory controls, risk management, systems design, operational performance and compliance.

When deploying AI systems, recognized clearing agencies must establish and maintain robust internal controls and cyber resilience measures, including information security, change management, problem management and system support, in accordance with National Instrument 24-102 Clearing Agency Requirements. They are required to record any system failures, malfunctions or delays, noting whether any are material. Additionally, clearing agencies must conduct regular system reviews and vulnerability assessments at least annually.

Matching service utilities must meet the applicable requirements under National Instrument 24-101 Institutional Trade Matching and Settlement, which includes the need to perform stress tests and vulnerability assessments and maintain business continuity plans.

Trade repositories and derivatives data reporting

Trade repositories must comply with the CSA's Trade Reporting Rules and ensure accurate, timely and secure derivatives data reporting. They must implement strong internal controls, cyber resilience measures and operational risk management. AI systems used for reporting must be designed to capture all required data accurately and remain accessible to regulators for oversight and analysis.

Designated Rating Organizations

Designated Rating Organizations (DROs) must comply with National Instrument 25-101 Designated Rating Organizations when using AI systems. Rating methodologies must remain rigorous, systematic and transparent. AI-driven credit rating processes must maintain explainability, data quality and adherence to established rating assumptions. DROs should exercise caution when automating aspects of credit ratings and publicly disclose any use of AI systems in their processes.

Designated Benchmark Administrators

Designated Benchmark Administrators (DBAs) using AI systems must comply with Multilateral Instrument 25-102 Designated Benchmarks and Benchmark Administrators. Accordingly, benchmarks must remain accurate, reliable and transparent. AI-driven benchmark determinations must be explainable, verifiable and supported by robust data monitoring, validation and record-keeping. DBAs should exercise caution when automating benchmark processes and publicly disclose any AI system use.

4. Consultation

The CSA has invited stakeholder feedback on the use of AI systems in capital markets by publishing consultation questions on its website. The consultation will assess whether any adjustments to securities laws are necessary in relation to AI systems. The comment period is open until March 31, 2025.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
