In March 2023, the Department for Science, Innovation and Technology published a white paper detailing its proposed approach to regulating artificial intelligence (AI) in the UK. It takes a different approach from the EU, with a non-statutory, principles-based framework to be applied by regulators in each sector. By contrast, the EU AI Act will, once finalised, be prescriptive legislation applying across sectors.

The UK's aim is to foster innovation, driving growth and prosperity, while also enabling consumer trust and strengthening the UK's position as "a global leader in AI". It will do this by setting the framework of principles which sector-specific regulators will apply proportionately and sympathetically to their respective regulatory requirements.

Due to the extraordinary pace of change in the sector, this non-statutory, flexible approach is intended to be built up through guidance that can be issued and updated easily.

Central government will coordinate and ensure consistency of approach between regulators by giving guidance on how to apply the principles, including as to the risks posed in particular contexts and what measures should be applied to mitigate them, and by monitoring and evaluating the effectiveness of the framework.

Regulators must provide guidance as to how the principles interact with existing legislation, illustrate what compliance looks like, and produce joint guidance with other regulators where appropriate.

The UK framework will define AI by reference to two characteristics:

  • Adaptivity - the fact that training allows the AI to infer patterns often not easily discernible to humans or envisioned by the AI's programmers;
  • Autonomy - some AI systems can make decisions without the express intent or ongoing control of a human.

The intention is to regulate high-risk uses rather than all applications with these characteristics. The white paper does not pre-define these uses, so we must wait to see whether they coincide with those which will be most affected in the EU, such as credit scoring and life and health insurance underwriting.

Below are the principles and some initial thoughts on themes one might expect to see in future guidance from the financial services regulators.

Safety, security and robustness

AI systems should function in a secure and safe way throughout the lifecycle, and risks should be continually identified, assessed and managed. IT, data and cyber security are base-level expectations for the financial services regulators. This principle also echoes their focus on risk assessment and operational resilience, both of which are essential to the safety of both users of financial services and the financial system itself.

Firms might expect guidance on how to adapt these arrangements to the new combination of risks presented by AI.

Appropriate transparency and explainability

Transparency refers to the communication of information about an AI system, and explainability refers to the extent to which it is possible for relevant parties to access, interpret and understand the decision-making process of the AI. The paper acknowledges that an appropriate degree of transparency and explainability should be proportionate to the risks presented by the AI system.

In financial services, this might depend on factors such as the type of decision and the customer concerned. These concepts also reflect existing rules and principles, including the new Consumer Duty outcome requiring firms to ensure customers receive communications they can understand. This could be particularly challenging where firms use AI.

Fairness

The paper recognises that the concept of fairness is embedded across many areas of law and regulation and invites actors involved in the AI lifecycle to consider the definitions that are relevant to them. It anticipates that regulators may need to develop and publish descriptions and illustrations of fairness that apply within their regulatory remit, together with technical standards.

The Financial Conduct Authority (FCA) should have a head start here, given that several of its Principles for Businesses are based on fairness and the Consumer Duty requires firms to apply the concept even more broadly, including in relation to fair value.

Accountability and governance

This is about effective oversight of the supply and use of AI systems, with clear lines of accountability. Regulator guidance should reflect that accountability means individuals and organisations adopting appropriate measures to ensure the proper functioning of AI systems with which they interact. Financial services firms in the UK have done much work in this area over the last few years as they have implemented the Senior Managers and Certification Regime.

This would naturally accommodate AI as it does other initiatives, but the Prudential Regulation Authority (PRA) and FCA will want to see firms putting in place wider governance arrangements around all stages in the AI lifecycle. In fact, robust governance is likely to be one of the key means through which firms can give the regulators comfort about their evolving use of AI.

Contestability and redress

The paper provides that users, affected third parties and actors in the AI lifecycle should be able to contest a decision that creates a material risk of harm. Regulators are expected to encourage and guide regulated entities to make routes easily available and accessible. It will be interesting to see how the regulators approach this.

Customers of financial services have personal rights of action against firms in relatively limited circumstances, with the regulators more often acting against firms for wider systems and controls weaknesses. Customers of certain services may also be entitled to compensation in the event of a firm failure, and this does not depend on the technology used to deliver the service.

Unlike the EU AI Act, the UK plan does not include penalties at this stage.

The white paper is open for consultation until June 21. In the next six months, the government is due to issue the principles and initial implementation guidance to regulators, and in the next 12 months the key regulators will publish guidance on how the principles will apply within their areas.

Given their resonance with existing regulatory expectations, firms developing their use of AI should be working within the spirit of these principles even now, while keeping abreast of the specific guidance to follow not only from the PRA and FCA, but also from other relevant regulators.

This article was originally published by Thomson Reuters Regulatory Intelligence.
