In our previous alert, we discussed the emerging trends for regulating artificial intelligence (AI) in financial services and mentioned a joint discussion paper on AI and machine learning published by the Prudential Regulation Authority (PRA) and the Financial Conduct Authority (FCA), DP5/22 – Artificial Intelligence and Machine Learning (DP).

On 26 October 2023, the PRA and FCA published the public responses to the DP in a feedback statement, FS2/23 – Artificial Intelligence and Machine Learning (FS2/23).

Although FS2/23 does not set out any policy proposals, the responses make for useful reading and shed further light on the developing themes for regulating AI in financial services that we identified in our previous alert.

FS2/23 in summary

  • A regulatory definition of AI would not be useful. Alternative principles-based or risk-based approaches, focusing on specific characteristics of AI or on the risks it poses or amplifies, would be preferable.
  • As with other evolving technologies, AI capabilities change rapidly. "Live," periodically updated guidance and examples of best practice will be necessary.
  • Ongoing industry engagement is important. Initiatives such as the AI Public-Private Forum could serve as templates for ongoing public-private engagement.
  • The AI regulatory landscape is complex and fragmented; regulatory alignment and coordination between domestic and international regulators would be helpful.
  • Data regulation, in particular, is fragmented, and more alignment would help address data risks, especially those related to fairness, bias, and protected characteristics.
  • A key focus of regulation and supervision should be on consumer outcomes, especially with respect to ensuring fairness and other ethical dimensions.
  • Increasing use of third-party models and data is a concern and an area where more regulatory guidance would be helpful, noting the systemic risks posed by a firm's reliance on certain critical third-party providers (CTPs) and recent changes to UK law. (See our alert Too Important to Fail - Part 2: The Coming Regulation of Providers of Critical Technology Services to UK Financial Institutions.)
  • AI systems can be complex, and a coordinated approach within firms — in particular, closer collaboration between data management and model risk management teams — would be beneficial.
  • The model risk management principles for banks published by the PRA as SS1/23 could be strengthened or clarified to address issues particularly relevant to models with AI characteristics.
  • Existing firm governance structures and regulatory frameworks such as the Senior Managers and Certification Regime (SMCR) are sufficient to address AI risks.

Aligning with emerging trends?

In our previous alert, we shared our thoughts on the emerging trends for regulating both generative AI and AI more generally. The responses in FS2/23 reinforce these:

  • It is not clear that AI necessarily creates material new risks in the context of financial services, although the rapid rate of technological change may create new risks; it remains too early to tell.
  • Instead, AI may amplify and accelerate existing financial sector risks, i.e., those connected with financial stability, consumer protection, and market integrity, which the financial services and markets regime is designed to reduce. FS2/23 highlights the need for guidance to keep pace with change.
  • AI will also have a role in firms' control of financial sector risks and, indeed, in the FCA's and PRA's regulation of the sector (although questions may arise about the justification for AI-generated administrative decisions and their compliance with statutory and common law principles of good administration).
  • In keeping with the concerns about amplifying and accelerating existing risks, it is appropriate for the FCA and PRA, as current financial sector regulators, to be charged with regulating AI.
  • The FCA's and PRA's role in regulating AI reinforces the case for using and developing existing financial sector regulatory frameworks, which enhances continuity and legal certainty and makes proportionate regulation more likely (although not inevitable).
  • Effective governance of AI is needed to ensure that AI is properly understood, not only by the technology experts who design it but also by the firms that use it (a "know-your-tech" (KYT) duty), so that firms can respond effectively and avoid harm materialising from any amplified and accelerated risks. As FS2/23 indicates, the SMCR should accommodate a KYT duty.
  • Staying with the theme of existing frameworks, the growing importance of technology and of currently unregulated CTPs, noted above and specifically raised in FS2/23, has resulted in an extension of powers for the FCA and PRA under the recently enacted Financial Services and Markets Act 2023 (FSMA 2023), as noted in our related alert and addressed on our dedicated microsite. If a provider of generative AI models is used by a large number of financial institutions, or by a small number of large or important financial institutions, that provider may become subject to the jurisdiction of the FCA or PRA under the new powers that FSMA 2023 introduces. The FCA will consult on its requirements for critical services providers later in 2024.
