Artificial intelligence (AI), with its power to process large volumes of data, can provide more personalisation of financial products and services for consumers at greater scale and efficiency, and at lower cost. It can, in principle, enable firms to provide better support for vulnerable customers (for example, to consumers without standard credit histories, or through the use of simplified advice). All this could help to advance the FCA's desired outcome of increased access through greater financial inclusion.1 This In Practice article considers the regulators' recent Discussion Paper, which seeks to foster debate on these issues.

Through their recent discussion paper (DP) DP5/22, Artificial Intelligence and Machine Learning,2 the Bank of England, the PRA and the FCA hope to encourage a broad-based and structured discussion with stakeholders on the benefits and the challenges associated with the use and regulation of AI.

The DP considers the application of legal requirements and guidance to the use of AI in UK financial services to support consumer protection, competition, the safety and soundness of individual firms, market integrity, and financial stability. It is accompanied by appendices reviewing domestic and international developments relating to AI and the current regimes governing data and model risk management, together with a list of selected relevant publications and a table summarising how global regulatory publications deal with various challenges.

The DP aims to clarify how the existing regulatory framework applies to AI and to identify gaps in it. Of particular concern are the additional challenges and risks that AI may pose for firms' decision-making and governance processes: the regulators are contemplating whether these might be addressed through the Senior Managers and Certification Regime and other existing regulatory tools.

The existing tools will include the rules introduced under the new Consumer Duty regime, which aim to raise standards for firms that deal with, or can determine or materially influence outcomes for, retail consumers. These come into force:

  • for new and existing products and services that are open to sale or renewal, on 31 July 2023; and
  • for "closed" products and services, on 31 July 2024.

Those currently working on implementation of the new Consumer Duty regime will read the DP thoughtfully as it provides some insight into the regulators' thinking, particularly around how AI may impact the four Consumer Duty outcomes:

  • products and services;
  • price and value;
  • consumer understanding; and
  • consumer support.

In the context of the design of products and services, the Consumer Duty (which makes no mention of AI) seeks to ensure the fair treatment of customers, taking into account their diverse needs.

Whilst recognising that AI can be used to identify demographics with specific needs, or characteristics of vulnerability, and provide better product matching for consumers, the FCA highlights in the DP some risks that the use of AI may pose, including the risk:

  • of exploitation of behavioural biases, consumer inertia and other vulnerabilities (eg the recent FCA warning3 about game-like elements in stock trading apps);
  • that biases in the data underlying AI decisions may give rise to discriminatory decisions (firms identify this as one of the top risks4); and
  • that the use of AI applications may result in the exclusion of particular groups.

Where AI applications are deployed, careful definition of target markets will be critical, as will ensuring that the product or service (and the AI applications that support them) will meet the needs and objectives of that target market and will have appropriate regard to the nature and scale of the characteristics of vulnerability within that target market. Firms should be aware that the FCA intends to monitor final FOS decisions on complaints about inappropriate product or service sales, as well as data from supervision and authorisation activities, including management information and complaints data; it is also contemplating the development of additional metrics.

The new Consumer Duty is likely to have particular relevance in the context of differential pricing between groups of consumers, given the consumer protection risks that may arise from potential AI bias. Firms must ensure the price is reasonable relative to the expected benefits: they will need to be able to justify prices offered through the use of AI to different groups, which will again involve taking into account those groups' differing needs and protected characteristics under the Equality Act, or their characteristics of vulnerability. The FCA expects that firms will monitor customer outcomes, and be able to identify, explain and justify where their AI models result in differing price or value for different cohorts of customers. The FCA will also undertake its own monitoring in this regard. Where a firm identifies that a product is no longer delivering fair value, the Consumer Duty regime requires it to take action to mitigate or remediate harm to existing customers, and to prevent harm to new customers.5

The Information Commissioner's Office considers that an AI-assisted decision made about someone without some form of explanation is unlikely to be fair, as it may limit their autonomy and scope for self-determination.6 An AI decision that results in an outcome such as denial of a life insurance policy, a refusal of credit or of a mortgage may adversely impact the customer. Meeting the consumer support and consumer understanding outcomes is likely to be more challenging for firms trying to explain how data may be used in AI decision-making, or to interpret how a particular AI decision was made. This is particularly true where machine learning means that although the firm can see the data inputs and the final decision, it may not be able to explain completely the mechanism by which the AI connects one to the other (the black box issue).

As the FCA has pointed out,7 what borrowers are really likely to find useful is a short summary of what drove the decision in their specific case and what they might do about it: "What are the factors that raised my estimated default risk?"; "What do I need to do to get approved next time?". It is likely that firms will need to offer a real-time human interface to provide effective support to customers. Interestingly, however, the US CFPB's 2020 TechSprint on the notification of adverse credit actions8 included the development of an interactive "approval simulator" powered by machine learning, which a consumer could use to see what actions, or combination of actions, would most easily yield a credit approval going forward, as well as the use of chatbot-driven engagements: the use of AI to solve AI challenges.

Finally, on governance, the DP quotes the IOSCO Guidance,9 which advocates requiring firms to have designated senior management responsible for the oversight of AI development, testing, deployment, monitoring, and controls. Whilst recognising that the technical complexity of AI systems makes it critical that staff responsible for developing or deploying them are competent to do so, the UK regulators are contemplating creating a Prescribed Responsibility for AI, or possibly a certification function which might be similar to the FCA's existing algorithmic trading certification function.10 The Consumer Duty, on the other hand, requires firms to have a champion at Board level (or equivalent governing body) who, along with the Chair and the CEO, ensures that the Duty is discussed regularly and raised in all relevant discussions. Firms will need to ensure that the AI oversight function can provide the right inputs so that those Board discussions take into account the potential impacts and risks to which the use of AI applications may give rise in the context of the Consumer Duty.

Footnotes

1. FCA's 'Our strategy - 2022 to 2025', https://www.fca.org.uk/publication/corporate/our-strategy-2022-25.pdf

2. https://www.bankofengland.co.uk/prudential-regulation/publication/2022/october/artificial-intelligence

3. https://www.fca.org.uk/publications/research/gaming-trading-how-trading-apps-could-be-engaging-consumers-worse

4. https://www.bankofengland.co.uk/Report/2022/machine-learning-in-uk-financial-services

5. PRIN 2A.4.25 R.

6. 'Explaining decisions made with artificial intelligence', Information Commissioner's Office, at https://ico.org.uk/for-organisations/guide-to-data-protection/key-dp-themes/explaining-decisions-made-with-artificial-intelligence/part-1-the-basics-of-explaining-ai/legal-framework/

7. FCA Insight, 'Explaining why the computer says "no"', https://www.fca.org.uk/insight/explaining-why-computer-says-no

8. Consumer Financial Protection Bureau, 'Tech Sprint on Electronic Disclosures of Adverse Action Notices', https://www.consumerfinance.gov/rules-policy/competition-innovation/cfpb-tech-sprints/electronic-disclosures-tech-sprint

9. IOSCO, 'The use of artificial intelligence and machine learning by market intermediaries and asset managers', https://www.iosco.org/library/pubdocs/pdf/IOSCOPD684.pdf

10. SYSC 27.8.23R.

Originally published in the December 2022 issue of Butterworths Journal of International Banking and Financial Law.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.