6 September 2024

The UK's AI Framework: Still In Developmental Stages

Steptoe LLP

In more than 100 years of practice, Steptoe has earned an international reputation for vigorous representation of clients before governmental agencies, successful advocacy in litigation and arbitration, and creative and practical advice in structuring business transactions. Steptoe has more than 500 lawyers and professional staff across the US, Europe and Asia.

Introduction

On July 17, 2024, the United Kingdom's (UK's) new Labour Government's plans for the next parliamentary term were presented through the King's Speech. Despite reports circulating prior to the King's Speech that the government was expected to introduce the long-awaited Artificial Intelligence Bill (UK AI Bill), it did not do so. Instead, the King stated that his government would "seek to harness the power of artificial intelligence as we look to strengthen safety frameworks" and would "establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models."

The lack of any specific legislative reform being announced in the King's Speech gives rise to speculation as to what regulatory framework the UK will introduce in relation to AI. This blog post: (i) reviews the UK's current framework on AI regulation; and (ii) identifies what steps we might expect the UK's new Government to take in regulating AI.

I. Current AI Regulatory Framework in the UK

As it stands, there is no AI-specific regulation in the UK. The prior Conservative Government took a cautious approach to regulating AI, fearing that regulation would slow AI innovation, and opted instead for a principles-based "pro-innovation" approach. As part of that relatively hands-off approach, in its February 2024 consultation response "A pro-innovation approach to AI regulation", the government set out a non-binding, sector-specific, principles-based framework1 that enables regulators of particular sectors to apply bespoke measures within their respective remits, tailoring their approach to the needs and risks of those sectors.

The five core principles articulated in the framework are: (i) safety, security and robustness; (ii) appropriate transparency and explainability; (iii) fairness; (iv) accountability and governance; and (v) contestability and redress. The prior Government's aim was to seek a balance between innovation and safety, while remaining agile enough to "deal with the unprecedented speed of development."

Additionally, late last year, the prior Government launched the AI Safety Institute (AISI) whose purpose is to evaluate AI models for risks and vulnerabilities, by testing the safety of emerging AI. Consistent with the prior Government's approach, however, companies were not obliged to provide information to the AISI, but could do so voluntarily. The AISI's role included developing technical expertise to "understand the capabilities and risks of AI systems, informing the government's broader actions" and working with a consortium of AI experts (such as developers, academics and members of civil society). The AISI operates as a research organization within the UK's Department for Science, Innovation, and Technology.

The prior Government did however plan to introduce targeted, binding regulations driven by the findings of the AISI, on those companies developing highly capable "general purpose AI models" (GPAI models) to "ensure that [those companies] are accountable for making these technologies sufficiently safe." The status of any such regulation is, however, unclear given the new Labour Government.

Attempts at more affirmative AI regulation in the UK have not succeeded. For example, an Artificial Intelligence (Regulation) Bill was introduced in the House of Lords in November 2023, but the Bill had not passed by the time Parliament was dissolved in May 2024 for the UK General Election, and it will not progress further.

Accordingly, companies operating or domiciled in the UK that are developing AI models should be guided by the principles set out above, as articulated in the government's response to the AI White Paper, until such time as any AI-specific regulation is introduced.

II. What is to come? UK AI Regulation Going Forward

The New Government's Position

Prior to the UK election in July 2024, the Labour Party released its election manifesto, which outlined several key initiatives on AI that it intended to implement if elected, including:

  • introducing "binding regulation on the handful of companies developing the most powerful AI models";
  • creating a new Regulatory Innovation Office, consolidating existing government functions. The proposal includes that the Office will help regulators "update regulation, speed up approval timelines, and co-ordinate issues that span existing boundaries"; and
  • banning the "creation of sexually explicit deepfakes."2

The manifesto's focus on "binding [...] companies developing the most powerful AI models" is echoed in the King's Speech, which articulated the new Government's intention to "establish the appropriate legislation [...] on those working to develop the most powerful artificial intelligence models." The new Government's approach primarily targets a select group of companies developing the most sophisticated AI models. The statements in both the manifesto and the King's Speech reflect the new Government's tendency towards stricter but also narrower regulation, potentially applicable to a smaller set of entities.

The new UK Government also appears keen to encourage the development of AI models, evidenced by its AI Opportunities Action Plan (Action Plan) released on July 26, 2024, which sets out a "roadmap for government to capture the opportunities of AI to enhance growth and productivity and create tangible benefits for UK citizens."3 The Action Plan considers how the UK can: (i) build an AI sector that can "scale and be competitive globally"; (ii) adopt AI to "enhance growth and productivity and support [its] delivery across government"; (iii) use AI to "transform citizens' experiences" with the Government encouraging AI application across the public sector and wider economy; and (iv) enable the use and adoption of AI by supporting "data infrastructure, public procurement processes and policy and regulatory reforms."

In recent months, the new Minister for the Department of Science, Innovation and Technology (DSIT), the Rt Hon Peter Kyle, has expanded on the Government's intended approach to AI regulation. In an interview with the BBC, Mr. Kyle outlined that the Government would impose a "statutory code" requiring companies developing AI to share safety test data with the Government and the AISI. Mr. Kyle and DSIT Junior Minister Baroness Jones have also separately indicated that a UK AI Bill will: (i) make voluntary agreements between companies and the government legally binding; and (ii) "place the AI Safety Institute on a statutory footing, providing it with a permanent remit to enhance the safety of AI."4 However, Mr. Kyle has also reassured major technology companies that when the UK AI Bill is implemented, its focus on the "most advanced models" will not limit the industry or AI development.

What We Can Expect from New UK AI Bill v. the EU AI Act

In light of the above, it is likely that the UK AI Bill will be less comprehensive and restrictive than the EU AI Act. [For further information on the EU AI Act see Steptoe's Article "The EU AI Act Is Formally Adopted: 5 Reasons Why Organizations Must Care"]. It is also of note that most of the discussion of the UK AI Bill to date has centered around regulating organizations that develop AI models. It is yet to be seen what regulation may be applied to the distribution, use, and application of those models, in sharp contrast to the EU AI Act, which applies to those broader activities.

The UK AI Bill may, however, reflect certain elements of the EU AI Act. For example, it may include obligations similar to those requiring developers to maintain detailed logs of safety testing under Article 11(1) and Annex IV of the EU AI Act. Further, the new Government's approach to targeting the "most powerful AI models," echoes the GPAI models' categorization under the EU AI Act, particularly with respect to those models that may pose a systemic risk.

Finally, the new Government has indicated a clear intention to strengthen its relations with the EU,5 and the pending UK AI Bill may be an opportunity to do just that by aligning it with the EU AI Act to promote interoperable reporting systems between the jurisdictions. Indeed, on July 30, 2024, Baroness Jones specifically noted that while the new Government intends to bring forward "highly targeted legislation that focuses on the safety risks posed by the most powerful models" it will remain "committed to working closely with the EU on AI and [further] co-ordinating with international partners - the EU, the US and other global allies - [in order to make] sure that these measures are effective."

Conclusion and next steps

While much is still unknown about the UK's approach to AI regulation, it seems clear that the new Government's approach will be more targeted and narrower than some previously anticipated. At the same time, the new Government appears to be moving away from the prior Government's pro-innovation, laissez-faire approach and will bind companies to their agreements with the Government, particularly in respect of sharing test data from advanced AI models. In essence, the new Government looks to build on the prior Government's approach, but goes further by creating statutorily binding obligations, while committing to fostering innovation. Indeed, as stated in the King's Speech: "We are dedicated to advancing our technological frontiers, ensuring that AI development aligns with national interests and public welfare."

Footnotes

1. https://www.gov.uk/government/consultations/ai-regulation-a-pro-innovation-approach-policy-proposals/outcome/a-pro-innovation-approach-to-ai-regulation-government-response

2. This issue seems to be covered by the Online Safety Act for the time being, which sets out a number of requirements for social media platforms to remove illegal misinformation and disinformation, including where it is AI-generated, as soon as it becomes available.

3. https://www.gov.uk/government/publications/artificial-intelligence-ai-opportunities-action-plan-terms-of-reference/artificial-intelligence-ai-opportunities-action-plan-terms-of-reference#process

4. Further, during a parliamentary debate on 30 July 2024, Baroness Jones confirmed that the new Government will "establish legislation to ensure the safe development of AI models by introducing targeted requirements on a handful of companies developing the most powerful AI systems". https://hansard.parliament.uk/lords/2024-07-30/debates/C1541E2E-0AE3-486C-9077-42CBA1785164/AITechnologyRegulations

5. The King's Speech, for example, refers to the Government seeking to "reset the relationship with European partners and work to improve the UK's trade and investment relationship with the European Union".

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
