There is currently no specific legislation in the UK that governs artificial intelligence (AI) or its use in various sectors. Instead, a number of general-purpose laws apply. These laws, such as the rules on data protection, employment and credit discrimination, and medical devices, have to be adapted to specific AI technologies and uses. They sometimes overlap, which can cause confusion for businesses trying to identify the relevant requirements or to reconcile potentially conflicting ones.

As a step toward a clearer, more coherent approach, on July 18, 2022, the UK government published a policy paper proposing a pro-innovation framework of principles for regulating AI in the UK, while leaving regulatory authorities discretion over how the principles apply in their respective sectors. The government intends the framework to be "proportionate, light-touch and forward-looking" to ensure that it can keep pace with developments in these technologies, and so that it can "support responsible innovation in AI - unleashing the full potential of new technologies, while keeping people safe and secure." This balance is aimed at ensuring that the UK is at the forefront of such developments.

The government's proposal dovetails with the UK Digital Regulation Cooperation Forum's (DRCF) plans for governing and auditing algorithms. (The DRCF is a joint initiative among the major UK regulators touching on digital services: the Competition and Markets Authority (CMA), the Financial Conduct Authority (FCA), the Information Commissioner's Office (ICO), and Ofcom, the UK communications regulator.)

The government's proposal is also broadly in line with the Medicines and Healthcare products Regulatory Agency's (MHRA) current approach to regulating AI. In its response to the consultation on the medical devices regime in the UK post-Brexit, the MHRA announced similarly broad-brush plans for regulating AI-enabled medical devices.

Scope

Rather than provide a fixed definition of AI and software, the UK government proposes to identify the core characteristics and capabilities of AI to inform the regulatory framework. These core characteristics could include:

  • the "adaptiveness" of the technology, and the fact that AI systems are "trained" on data and execute according to patterns and connections that are not easily discernible to humans; and
  • the "autonomy" of the technology, and the fact that decisions can be made without the intent or control of a human.

Regulators would then be charged with forming and updating AI definitions that fit their specific domains or sectors. In the healthcare sector, for example, a definition could turn on the extent to which diagnostic decisions are made by software without the involvement of healthcare professionals. In employment and education, by contrast, the definitions could be targeted to screening for hiring (or admission, in education), advancement, or performance evaluations.

A Pro-Innovation Approach

The government plans to focus on the specific context in which AI is being used and to take a proportionate, risk-based response. The contextual focus is intended to foster "a targeted and nuanced response to" AI-related risk at the application level by allowing regulators flexibility to treat different uses of AI differently, rather than dictating "one-size-fits-all" rules. Unlike under the proposals for the EU AI Act, regulators, such as the MHRA, the Equality and Human Rights Commission (EHRC), the ICO, the CMA, the FCA, and Ofcom, would be encouraged to develop rules implementing certain cross-sector principles appropriately for their sectors. The government will also engage with regulators to ensure that they proactively embed considerations of innovation, competition, and proportionality through their implementation and any subsequent enforcement of the framework. Nevertheless, it seems likely that AI uses with significant impacts on individuals (e.g., health outcomes, employment and educational opportunities, credit allocation, and online safety) will lead to greater oversight and regulation.

Cross-Sector Principles

The government proposes a set of cross-sector principles that regulators would develop into sector-specific measures. These principles would be set out not in legislation but in guidance, so that the framework can "remain agile enough to respond to the rapid pace of change in the way that AI impacts upon society." The government currently believes there should be six principles:

  • Ensure that AI is used safely
  • Ensure that AI is technically secure and functions as designed
  • Make sure that AI is appropriately transparent and explainable
  • Embed considerations of fairness into AI
  • Define legal persons' responsibility for AI governance
  • Clarify routes to redress or contestability (i.e., how those subject to AI systems' decisions can challenge the outcomes)

Because not all regulators have the requisite statutory authority and flexibility to adapt and enforce these principles for their sectors, the government will consider what statutory changes may be needed.

Next Steps

The government's plans for AI governance remain at an early stage of development. With a change of Prime Minister and key government officials in the autumn, the proposal will be revisited by the new Ministers and may change significantly before any framework is put in place.

The government nonetheless seeks views on its current proposals, which it intends to flesh out in a more formal White Paper toward the end of the year.

The call for views and evidence will be open for ten weeks, closing on September 26, 2022, and comments can be sent to evidence@officeforai.gov.uk. Businesses should take this important opportunity to shape the AI regulations with which they will have to contend.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.