ARTICLE
9 January 2025

Compliance Considerations In AI

Macfarlanes LLP

The UK financial services sector must prepare for increased AI oversight, aligning with principles like safety and transparency. Firms should develop AI strategies, ensure compliance with regulations, and consider cross-border rules as AI use expands.

Though the United Kingdom has no purely AI-focused regulatory framework, financial services firms should prepare themselves for heightened supervision over their deployment of artificial intelligence and ensure compliance with a multitude of diverse and broadly-focused requirements. Alexandra Green and Michael Sholem set out how to approach this task.

The use of AI by financial services firms is not new, but the use cases for it have grown significantly in the past few years, particularly given the advent of generative AI. Firms are fast recognising that AI can be deployed at all levels of their organisation, enhancing efficiencies and optimising the customer experience. In the expectation that use cases will continue to evolve, the Financial Conduct Authority has recently launched a number of AI-focused initiatives, including November's AI Input Zone. The FCA is using the AI Input Zone (a development of the FCA's AI Lab) to gather stakeholders' views on current and future uses of AI, along with the requisite financial services regulatory framework. In light of these developments, financial services firms should prepare themselves for, if not additional regulation, at least heightened supervision over their deployment of AI.

So far, the FCA has diverged from the European Union in its approach to AI regulation. On 1 August 2024, a new EU Regulation, the Artificial Intelligence Act (the 'AI Act'), came into force across all 27 member states, with the majority of the provisions of the AI Act applying from 2 August 2026. The AI Act provides a detailed and comprehensive legal framework focused specifically on AI across all sectors of the economy. In contrast to the EU's more prescriptive stance, the UK Government has adopted a different approach that it believes will avoid the risk of stifling innovation in this area. The United Kingdom has taken a high-level, principles-based and outcomes-focused approach to AI regulation which is technology-agnostic. The UK Government's approach is based in particular on five principles (the '5 Principles') as follows: (i) safety, security and robustness; (ii) appropriate transparency and explainability; (iii) fairness; (iv) accountability and governance; and (v) contestability and redress.

The FCA set out in its April 2024 Artificial Intelligence Update (the 'AI Update') its approach to AI regulation, building on the Government's 5 Principles. The FCA explained how its existing regulatory framework and regulatory standards aligned with the 5 Principles. In particular, the AI Update drew firms' attention to relevant existing regulatory requirements that support each of the 5 Principles. The FCA also said in the AI Update that it will closely monitor the adoption of AI across UK financial markets and keep under review the option of adapting the regulatory regime in the future. This monitoring is clearly well underway, as shown by the recent initiatives from the AI Lab and the evolving use cases for AI. Pending any further developments, in the absence of a purely AI-focused regulatory framework, firms must ensure their use of AI is compliant with a multitude of diverse and broadly-focused regulatory requirements, including the FCA's Principles for Businesses. We focus in this article on how Compliance teams should approach this task.

A holistic approach

Given the breadth of regulation that applies to firms when adopting AI, firms will have to take a holistic approach to AI compliance. Compliance teams will need to map out all applicable aspects of the regulatory framework and understand how these are engaged by each use-case of AI. In particular, Compliance should prioritise those areas of regulation highlighted by the FCA in the AI Update (such as considering the FCA's Principles for Businesses, the Consumer Duty, SYSC requirements, the Equality Act 2010 and the UK GDPR).

However, Compliance teams should not have to start from scratch. They can mirror the FCA's approach of building on existing regulatory frameworks. While firms will need to develop a specific AI strategy, they ought to have many processes already in place to support that strategy. These processes should be capable of being adapted to provide for AI risks.

Developing an AI strategy

Regulated firms will need to implement, and keep under review, an AI strategy ensuring that all uses of AI are identified, managed and monitored in accordance with the firm's stated risk appetite. The AI strategy should set out the firm's AI capabilities and the firm's approach to AI, covering the following areas in particular:

  • The firm's use of AI: This should detail all uses of AI by the firm and any restrictions on AI by the firm and its staff. Firms will need to be able to explain their rationale for using particular AI systems along with how this aligns with the firm's strategy and can bring about better outcomes for the firm's customers. Firms will also need to assess any third-party exposure to AI and review their counterparty arrangements to ensure that they understand and accept such uses.
  • AI governance process: A process should be implemented for approving new use-cases for AI as well as monitoring and reviewing existing AI uses within the firm. A documented approval process should be maintained that requires approval of each use of AI and any significant changes to that use. The approval process should identify:
    • the AI process to be implemented detailing the AI product and any adaptations to the product;
    • the AI provider and diligence conducted on that provider;
    • the individuals/team responsible for engaging the AI provider;
    • the issue that the AI is providing a solution to;
    • how the use case fits in with the firm's AI strategy;
    • any trials and initial testing to be conducted;
    • any additional resourcing and staff training required for the AI use;
    • all risks of using the AI for the use case;
    • the processes and mitigating controls which will be implemented to manage identified risks and ensure that the AI is used in compliance with regulatory requirements;
    • how the use of AI will be disclosed to customers to ensure transparency;
    • how often the AI use should be reviewed to ensure that its use aligns with the firm's AI strategy; and
    • the individual and relevant committee responsible for approving the use of the AI.
  • AI risk management: A thorough assessment of all the risks for each use case will need to be considered (including third party use). The risks posed will vary considerably depending on the relevant use case and the AI employed. Risk management teams will have to be engaged from the outset of any AI use-case to ensure there is an appropriately tailored risk management solution.

    Firms will need to demonstrate that they have robust systems in place with a clear process for testing and monitoring. This means they will have to develop risk management tools which address both the anticipated day to day risks that arise when the AI is working as expected as well as planning for other unexpected but possible risks. For example, where AI is used for investment decision-making, firms should understand and expect that AI outputs may exacerbate biases in data and ensure that their processes mitigate against this. In the retail sphere, firms will need to demonstrate in particular how their processes provide for good customer outcomes and are Consumer Duty compliant.

    Firms must ensure that their use of AI does not compromise their resilience and that their business continuity arrangements provide for any outages or service issues in their AI systems. They will also need to implement measures to protect against the misuse or loss of data, in addition to safeguarding privacy. Firms should specify their risk tolerances and, as far as possible, apply quantitative thresholds, for example, in respect of data privacy issues, biased results or service interruptions. Particular care must be taken where firms are relying on third-party providers. Firms will need to assess if the AI use constitutes a critical or important outsourcing for the purposes of SYSC 8 requirements or a material outsourcing that requires disclosure to the FCA.
  • Senior Manager responsibility: Firms should ensure that it is clear which Senior Manager(s) are responsible for the firm's use of AI. In order for Senior Managers to exercise their responsibility, they must have sufficient expertise to understand AI and access to appropriate Management Information. Firms may need to provide additional training for Senior Managers or consider hiring further resource. Where AI use is a key part of a firm's processes and/or the provision of services to clients, firms should think about whether to amend the relevant Senior Manager's Statement of Responsibility to include AI risk management.
  • AI resource: Firms adopting AI tools must have sufficient expertise at all levels of their business. It will be important for firms to understand any innate limitations of each AI use-case in order to manage the associated risks. This will be particularly relevant for firms using Large Language Models (LLMs), for example, for automating compliance monitoring processes or for reviewing legislation and assessing whether a firm's practices are compliant with regulation. Firms will need to factor into their use of AI outputs, and into their risk management framework, the fact that, due to how LLMs work, outputs will not always be 100 per cent accurate. Outputs known as 'hallucinations', which appear plausible but are in fact incorrect, are a particular risk.
  • Cross-border services: Where a firm is part of a multinational group, or where it provides products and services to customers outside of the UK, it should consider whether rules or guidance relating to AI from other jurisdictions should apply to its operations in the UK. For example, the implementation of the EU's Digital Operational Resilience Act1 and the AI Act should be taken into account by any UK firm with a material connection to EU-regulated firms, or firms with a large number of EU customers. It may be necessary for such firms to put in place policies and procedures that comply with both UK and EU rules in this area.
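In substance, the governance checklist above describes a structured record that must be completed and kept current for each AI use case. Purely by way of illustration, and only for firms that choose to capture such records systematically, the checklist might be sketched as follows; the `AIUseCaseApproval` class, all field names and the example values are hypothetical assumptions, not drawn from FCA rules or guidance:

```python
# Hypothetical sketch only: the class, field names and example values below
# are illustrative assumptions, not drawn from FCA rules or guidance.
from dataclasses import dataclass, field
from typing import List


@dataclass
class AIUseCaseApproval:
    """One record per AI use case, mirroring the approval checklist above."""
    use_case: str                        # the issue the AI is solving
    product: str                         # AI product and any adaptations
    provider: str                        # provider and diligence reference
    owner: str                           # individual/team engaging the provider
    approver: str                        # Senior Manager / committee sign-off
    risks: List[str] = field(default_factory=list)     # identified risks
    controls: List[str] = field(default_factory=list)  # mitigating controls
    customer_disclosure: str = ""        # how the AI use is disclosed
    review_months: int = 12              # agreed review cycle

    def needs_review(self, months_since_approval: int) -> bool:
        # Flag the use case once the agreed review cycle has elapsed.
        return months_since_approval >= self.review_months


# Illustrative record for a hypothetical retail chat-triage use case.
record = AIUseCaseApproval(
    use_case="Triage of retail customer queries",
    product="LLM assistant with adapted prompts",
    provider="Vendor X (diligence reference DD-041)",
    owner="Digital Channels team",
    approver="Responsible Senior Manager / AI Committee",
    risks=["hallucinated responses", "data leakage"],
    controls=["human review of escalations", "output logging"],
    customer_disclosure="Chat banner stating AI assistance is in use",
    review_months=6,
)
```

However such records are kept, the point is the same: each approval should capture the same fields the checklist requires, so that the use, its owner, its risks and its review cycle can be evidenced to the regulator on request.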

Final thoughts

AI has the potential to revolutionise the financial services industry by providing tools and solutions that enhance efficiency and improve customer outcomes. While the FCA is currently in listening and learning mode, it has made its expectations of firms clear. Ultimately, whatever approach a firm takes to AI use, it will be for Compliance to ensure that this approach can both be explained and justified. In the future, given the pace of advancements in this field, it may be the case that firms will additionally need to justify their approach if they are not using AI.

Footnote

1 Regulation (EU) 2022/2554.

Originally published by www.compliancemonitor.com and www.i-law.com

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
