In a final report, IOSCO issued guidance to assist regulators in supervising market intermediaries' and asset managers' use of artificial intelligence ("AI") and machine learning ("ML").

IOSCO stated that, although the use of AI and ML can increase efficiency, their use also increases risks to the markets and consumers. IOSCO encouraged regulators to require market intermediaries and asset managers that use AI and ML to do the following:

  1. ensure senior management oversees the development and controls of AI and ML, including a documented internal governance framework for accountability;
  2. regularly validate the results of their AI and ML applications to confirm (i) expected behavior in both stressed and unstressed market conditions and (ii) compliance with regulatory obligations;
  3. have the expertise necessary to understand and challenge the algorithms produced, and to conduct appropriate due diligence;
  4. have a service level agreement that sets the scope of the outsourced functions with clear performance indicators, and rights and remedies for poor performance;
  5. disclose meaningful information as to their AI and ML use (and regulators should determine the information they need from firms for appropriate oversight); and
  6. have controls in place to ensure that the data on which their AI and ML depend is of sufficient quality to prevent biases, and otherwise consider ethical aspects of the use of the technology, such as privacy, accountability, explainability and auditability.

IOSCO noted that members and firms should "consider the proportionality of any response" when seeking to implement such measures, adding that the regulatory framework may need to "evolve in tandem to address the associated emerging risks."

The discussion of "biases" is important reading for firms seeking to implement AI-based decision-making. The intellectual difficulty is that a "bias" would seem to suggest a false outcome (e.g., a data integrity or performance issue in the AI or ML application); in this context, however, the definition also includes an outcome that is viewed as socially undesirable.

If the term "bias" includes outcomes that are not socially desirable, a firm may not be able to correct a "bias" simply by improving its technology. A more useful approach may be to compare the number or percentage of "socially desirable" outcomes produced using AI with the comparable figure for human decision-making. If the AI produces more socially desirable results than human decision-making does, regulators are likely to view that outcome more favorably than a system that demonstrates higher predictive accuracy but achieves it through decisions that are not viewed as socially desirable.
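The comparison described above can be made concrete. The following is a minimal, purely illustrative sketch: the decision lists, the loan-approval framing, and the notion that an approval counts as the "socially desirable" outcome are all invented assumptions, not part of the IOSCO guidance.

```python
# Hypothetical sketch of the outcome-rate comparison suggested above:
# measure how often the AI model and human reviewers each produce the
# "favorable" (assumed socially desirable) outcome for the same cases.
# All data below is invented for illustration.

def favorable_rate(decisions):
    """Return the fraction of decisions that are favorable (True)."""
    return sum(decisions) / len(decisions)

# Invented sample: approval decisions on the same applicant pool,
# once by human reviewers and once by an AI model.
human_decisions = [True, False, False, True, False, False, True, False]
ai_decisions    = [True, True,  False, True, False, True,  True, False]

human_rate = favorable_rate(human_decisions)
ai_rate = favorable_rate(ai_decisions)

# Under the commentary's framing, an AI whose favorable-outcome rate is at
# least as high as the human baseline is easier to defend to regulators,
# even if a more accurate but less favorable model exists.
if ai_rate >= human_rate:
    print(f"AI favorable-outcome rate ({ai_rate:.1%}) meets or exceeds "
          f"the human baseline ({human_rate:.1%})")
else:
    print(f"AI favorable-outcome rate ({ai_rate:.1%}) falls below "
          f"the human baseline ({human_rate:.1%})")
```

Any real comparison would, of course, need a defensible definition of which outcomes count as "socially desirable" and a like-for-like case population; the sketch only shows the shape of the rate comparison.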

Primary Sources

  1. IOSCO Press Release: IOSCO Publishes Guidance for Intermediaries and Asset Managers Using Artificial Intelligence and Machine Learning
  2. IOSCO Final Report: The Use of Artificial Intelligence and Machine Learning by Market Intermediaries and Asset Managers

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.