On 6 February 2024, the UK Government published its response to its AI White Paper consultation (for an outline of the White Paper, see our previous article here). In this week's response, the Government outlined a future roadmap for the development and enforcement of AI regulation in the UK.

The response represents the Government's most detailed explanation to date of what it envisions a "pragmatic", "pro-innovation" approach to AI regulation will look like. While no new AI law has been proposed, the Government acknowledges that "future binding measures" may be required to address AI-related harms. The response paper outlines the Government's focal points and enforcement structure for AI governance, as well as potential binding measures on organisations.

New categorisation of AI systems and AI risks

In contrast to the EU's risk-based categorisation of AI systems in the EU AI Act, the UK Government's regulatory approach distinguishes between:

  • A capability-based categorisation of AI systems, and
  • An outcome-based categorisation of AI risks.

Highly capable general-purpose AI (HiGen AI). This refers to AI systems capable of performing at a high level across a wide variety of tasks. Such capability will be assessed through an evaluation of (i) the amount of compute used to train the model and (ii) how risk may develop from the capabilities of the AI system. The Government did not provide specific compute or capability thresholds, and indicated that additional factors may need to be considered when evaluating which systems fall within the scope of HiGen AI.

The Government will assess the development and use of HiGen AI particularly closely, as flaws intrinsic to such AI systems may impact multiple sectors or use cases due to their wide range of applicability. For example, a HiGen AI system with an error bias that is deployed in healthcare and recruitment may lead to discriminatory outcomes in both services.

The Government says that it will focus regulation on the developers of HiGen AI systems, which may be required to comply with future binding obligations relating to transparency, corporate governance and risk management (particularly at the pre-deployment stage), among other requirements. Other actors across the AI value chain (such as data or cloud hosting providers) may have their responsibilities considered at a later stage.

Non-HiGen AI. The Government identified two additional categories of AI systems, although these will be subject to comparatively less regulatory focus. The Government considers that AI systems capable of performing at a high level, albeit across a narrow set of tasks and usually within a specific sector (Narrow AI), are more likely to be effectively regulated within existing frameworks due to their narrow scope of application.

However, the Government indicated that further intervention may be required if Narrow AI presents potentially dangerous capabilities. The Government remained cautious about AI systems that can complete tasks over long timeframes or tasks involving multiple steps, and that are capable of using tools such as the internet or Narrow AI systems to do so (Agentic AI). It noted that further consideration may be required as the risks around Agentic AI emerge.

The UK AI risk taxonomy. The Government outlined three broad categories of AI risks:

  • Societal harms (e.g., bias and discrimination, unfair outcomes, and inaccurate or unsafe AI-generated content);
  • Misuse risks (e.g., risks arising from the use of AI for illegal purposes such as electoral interference, cyberattacks, or fraud); and
  • Autonomy risks (e.g., risks arising from the reduction of human control and/or increased AI influence).

The Government treats each category of risk as equal in priority. It outlined the measures it has implemented or will implement to address these risks, but did not indicate whether binding measures may be imposed on organisations to mitigate them.

Future roadmap of AI regulation

Central and delegated AI regulation in parallel. A new multidisciplinary team to undertake cross-sectoral risk monitoring has been established within the Department for Science, Innovation and Technology (DSIT). This team will review cross-sectoral AI risks and evaluate the effectiveness of governmental and regulatory intervention.

In addition, the UK Government will establish a steering committee before the summer of 2024 to support and guide the activities of a formal regulator coordination structure within the Government. However, this does not mean the Government has abandoned its initial approach of delegated AI regulation: it will continue to empower sector-specific regulators (e.g., through a £10 million package to boost regulators' AI capabilities) to regulate AI on a sector-by-sector basis.

Upcoming guidance in the spring. A range of AI guidance will be published this spring. This includes guidance on AI assurance to help businesses and organisations understand how to build safe and trustworthy AI systems, and a potential AI cybersecurity code of practice.

Regulators such as the UK Office of Gas and Electricity Markets (Ofgem), the UK data protection regulator (the Information Commissioner's Office) and the UK Financial Conduct Authority will also be publishing their respective approaches to AI regulation in the spring of 2024 (i.e., by the end of April at the latest), and this guidance will include a 12-month plan of their respective activities relating to AI regulation.

A future UK AI Act? The Government indicated that, while UK AI legislation is not imminent, it would introduce legislation if: (i) it was not sufficiently confident that voluntary measures would be implemented effectively by all parties; (ii) existing legal powers could not effectively mitigate AI risks; and (iii) such legislation could significantly mitigate risk without unduly affecting innovation and competition.

Commentary

By continuing to pursue a light touch approach, the UK continues to distinguish itself from the EU's position on the fundamental question of how to regulate AI. However, this week's developments indicate that the UK's approach to regulating AI may have been influenced by the EU in some respects. In particular, the new centralised AI regulatory functions in the UK Government replicate some of the supervisory functions of its recently established EU counterpart, the EU AI Office (for more information on the EU AI Office, please see our previous article here).

It is also worth noting that this roadmap is not a conclusive outline of the future of AI regulation in the UK. In addition to regulators' upcoming AI guidance, a general election is anticipated in 2024. While the UK Labour Party favours AI regulation, it remains to be seen whether a Labour government would overhaul the UK's current regulatory approach to AI.
