On October 10, 2023, the Ontario Securities Commission (OSC) and EY published a report titled "Artificial Intelligence in Capital Markets: Exploring Use Cases in Ontario" [PDF] (the Report), which examines the current state of artificial intelligence (AI) in capital markets. The purpose of the Report is to understand how market participants are using AI and to consider how the OSC can best support the adoption of AI in Ontario's capital markets through oversight and regulation.

In addition to discussing the benefits that AI can offer to capital markets, the Report recognizes AI's potential disruptive impact and raises questions about how its risks should be managed, including the potential for malicious use of AI systems.

AI use in Ontario's capital markets

According to the Report's findings, market participants are currently using AI to enhance existing products, making processes more efficient and accurate, rather than to create entirely new products. These enhancements include trade order execution and surveillance, high-frequency trading, hedging, futures market analysis and sales and marketing.

The Report highlights that the "most mature use of AI in capital markets" is focused on three principal areas:

  1. improving operational efficiency and accuracy
  2. assisting with trade surveillance and detection of market manipulation
  3. supporting advisory and customer services through the use of automated processes

For example, AI has been used for risk management functions such as anti-money laundering and trade surveillance, as well as for verifying information and identifying individuals when onboarding prospective clients. Market participants are also leveraging AI to gain insights into financial operations, including by automating trade reconciliation.

While AI plays some role in asset allocation and risk management, its use in these areas is still in its early stages in Canada. Large hedge funds are using AI for research, economic analysis and order execution, but the Report finds that AI's role in trading otherwise remains limited.

Risks to market participants

As advanced and sophisticated AI systems become more popular among firms, market participants face new risks and challenges. For example, firms must grapple with obstacles to developing or procuring AI systems, including data constraints, access to skilled labour and corporate culture, all while competing with technology vendors for skilled AI professionals. For market participants to fully benefit from AI, key governance issues also need to be addressed internally, including data quality, privacy, fairness and interpretability. The Report recommends doing so through the implementation of a robust, AI-specific governance framework.

The Report also warns market participants of the potential for malicious use of AI systems by unscrupulous actors. AI models, for example, are vulnerable to data poisoning, input attacks and cybersecurity breaches. Emerging technology can also be exploited to generate more sophisticated phishing messages and emails, or to impersonate individuals or organizations. The Report recommends that market participants implement appropriate safeguards to maintain the stability of the financial system.

The regulation of AI

There are currently no formal rules governing market participants' use of AI technologies in the Canadian capital markets. In the absence of AI-specific regulatory oversight in Canada to date, the approach taken in the United States may be instructive as to what lies ahead.

South of the border, the Securities and Exchange Commission (SEC) recently proposed new rules related to the use of AI in the capital markets (the Proposed Rules). The Proposed Rules aim to address risks to investors from conflicts of interest associated with the use of predictive data analytics by broker-dealers and investment advisers. The Proposed Rules would require broker-dealers to evaluate whether the use of certain technologies (such as those that optimize, predict, guide, forecast or direct investment-related behaviours) in investor interactions involves a conflict of interest that results in the firm's interests being placed ahead of investors' interests. Firms would be required to eliminate or neutralize the effect of any such conflicts. The Proposed Rules would also require firms to have written policies and procedures designed to achieve compliance and to keep books and records related to these requirements.

Canadian regulators are clearly keen to address technological advancements. Along with publishing the Report, members of the Canadian Securities Administrators are looking at this issue closely and have gradually taken steps to recognize the growing pervasiveness of AI in their oversight processes. For example, in July 2021, the OSC, the Alberta Securities Commission and the British Columbia Securities Commission jointly announced the selection of Bedrock AI to support the Cross-Border Testing initiative — a project involving 23 regulators across five continents. Bedrock AI uses deep machine learning to process disclosure from public issuers, enhance regulators' supervisory processes and assist businesses with corporate risk analysis. The initiative provided Bedrock AI with the opportunity to test and scale products in multiple jurisdictions. This marked an important step by securities regulators towards broader adoption of AI in their oversight processes.

Other Canadian regulators, such as the Canadian Investment Regulatory Organization (CIRO), are also anticipating the widespread development of AI technology and surveillance tools by regulators and market participants alike. The Investment Industry Regulatory Organization of Canada, the predecessor to CIRO, published a study in 2019 titled "Enabling the Evolution of Advice in Canada" [PDF], which raised firms' concerns regarding investment in new compliance-related technologies, such as AI, and the regulations associated with them.

Conclusion

As the use of AI becomes the new norm amongst capital markets participants, so too grows the risk that some actors will exploit AI for malicious purposes. Regulators like the OSC are continuing to consider the impact of AI on investors and other market participants in order to become better equipped to assess and manage AI-related risks, while also supporting its responsible deployment. Regulators will have to strike an appropriate balance between allowing market participants to benefit from the capabilities that AI systems have to offer and implementing appropriate risk mitigation procedures to protect investors from bad actors and tech-specific vulnerabilities.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.