What Will The Next UK Government Do About Regulating And Supporting AI?

The UK's major parties propose different approaches to AI: the Conservatives would maintain light-touch regulation, while Labour plans to legislate on AI safety and to regulate deepfakes. Both parties support AI in healthcare, digital infrastructure, and public sector efficiency.

What approach is taken to artificial intelligence in the manifestos of the two main parties?

While there is a great deal of discussion on "What to do about AI" at national and international levels, the UK has so far taken a very light touch and simply asked existing regulators to develop an AI strategy based on their existing powers, shaped by non-statutory "high level principles".

We review the manifestos and statements of the two major parties contesting the election, to see whether we should expect change after the general election on 4 July.

No overarching AI regulation

The Science, Innovation and Technology Committee's report on AI Governance (published shortly before the last Parliament was dissolved) urged the next government to complete the gap analysis of current regulatory coverage of AI, and to implement findings as a priority. There is no reference to this work in the manifestos. There is also no sign that either the Conservative or Labour parties are planning to introduce comprehensive cross-sector AI regulation in the style of the EU's AI Act.

The UK would, of course, fall within the scope of the EU AI Act were the UK to rejoin the EU single market, as parties including the Liberal Democrats, Green Party and Scottish National Party wish to do. However, since both the main parties have ruled this out, it is very unlikely in the foreseeable future. The AI Act will therefore remain the law of a third-party jurisdiction, but directly relevant to UK businesses selling into the EU or wider European Economic Area once it comes into force next year.

The remit of the Labour Party's proposed Regulatory Innovation Office would apply across all sectoral and economic regulators and include their work on AI. It is not yet clear whether this office would also take on the role of overseeing the patchwork of AI regulation to ensure cohesion and consistency (as called for by the AI Governance report).

AI safety commitments

The Labour Party has stated that it will put voluntary AI safety commitments made by major tech companies onto a statutory footing. It has stated that while it does not plan to disrupt the existing voluntary codes, it will make sure that they are adhered to. In particular, it will require developers of "frontier AI" to release their safety data. The manifesto simply states that it will introduce regulation on "the handful of companies developing the most powerful AI models" in order to ensure that AI models are safely developed and used.

The Department for Science, Innovation and Technology (DSIT) has been working on AI legislation for some time. Its scope is unclear, but the current government's response to the AI white paper flagged that legislation in relation to highly capable general purpose AI might be necessary in the future if "existing mitigations", voluntary commitments and existing regulation were not proving effective or sufficient. Notwithstanding this work, the Conservative manifesto makes no reference to new legislation in this area (seemingly in line with the white paper response of waiting until legislation is necessary).

The AI Governance report recommendations included a call for the next government to be ready to issue such legislation.

Even if the next government does draw on DSIT work already done, commentators have noted that legislation around technology is typically complex to draft. It is therefore considered unlikely that any AI legislation would be ready in time to be included in the legislative programme to be announced in the King's Speech on 17 July.

Deepfakes

Both the Labour and Conservative manifestos state that they will introduce legislation to ban deepfakes where they are sexually explicit (Labour) or sexualised (Conservatives). This is not as simple as it might look at first sight. While a deepfake might be malicious and unacceptable in one context, it might be satirical or artistic in another.

AI and the workforce

Another area where draft legislation has been called for is in relation to the impact of AI on the workforce.

Under the EU's AI Act, the use of AI systems in relation to recruitment, work allocation, monitoring and appraisal, and contract termination will be classified as "high risk" and subject to the full regulatory regime. Emotion-inference AI systems will be banned altogether in the workplace. The EU Platform Workers Directive will introduce further specific legislation in relation to the algorithmic management of platform workers.

In the UK, since there is no overarching regulator for employment, the current government's light-touch approach leaves a potential gap in regulatory coverage of AI's impact on the workplace. The current government issued guidance on the use of AI systems in the recruitment cycle, setting out principles which could be applied to the employment life cycle more generally, but compliance is not mandatory.

In April 2024, the Trades Union Congress (TUC) published its proposed draft Artificial Intelligence (Employment and Regulation) Bill, intended to create legal protections for workers and employers in relation to the use of AI. Much of what is proposed would create a regime broadly similar to that in the EU.

The Labour Party manifesto includes discussion of extensive changes to employment laws, but there is no explicit reference to legislating around the impact of AI on workers. Its "Plan to Make Work Pay" document references working with "workers and their trade unions, employers and experts" to consider the impact of AI on "work, jobs and skills". It may be that legislation would emerge from those discussions, but it certainly does not appear imminent.

AI and intellectual property

The Conservative manifesto flags that it will continue to try to resolve the conflict between content rights holders and AI developers wanting to use (often web-scraped) data to train their models. The current government failed to find a middle way, having initially hoped to broker a voluntary code to address this issue. The manifesto indicates that it would seek to ensure protection and remuneration for rights holders while supporting the AI industry.

There is no comment in the Labour manifesto on this issue.

Finding a fair and sustainable resolution to this issue was identified by the AI Governance report as one of the twelve specific challenges that the next government needs to address.

Data and digital infrastructure

The Labour manifesto includes a couple of commitments that potentially feed directly into supporting growth of the AI sector.

First, it pledges to reform the UK planning process to reduce the barriers to building digital infrastructure – specifically data centres. AI development typically needs significant compute capacity, both for training AI systems and for hosting AI models and systems that are accessed "as a Service" via the cloud. The Conservative manifesto discusses further funding for "large scale compute clusters" to support AI safety research.

Second, the Labour manifesto proposes a "National Data Library" which would "bring together existing research programmes and help deliver data-driven public services". Training data is an essential raw material for AI development, so any opening up of access to the extensive public sector databanks is likely to be welcomed. The Conservative manifesto discusses making public sector data more comparable but with a view to facilitating holding public authorities to account, rather than to augmenting data resources as fuel for the AI sector.

Access to compute power and to data were both identified in the AI Governance report as part of the twelve challenges of AI governance needing resolution by the next government.

AI and the Life Sciences and Healthcare sector

In terms of support for particular sectors, both manifestos flag up the scope for boosting the work of the National Health Service using AI.

The Labour Party promises to use AI to speed up diagnostic services and make them more accurate, noting in particular that scanners with embedded AI can save lives by spotting cancer earlier. The Conservative Party manifesto flags up the potential to use AI to free up the time of doctors and nurses for frontline care, and promises "new digital health checks" to prevent strokes and heart attacks.

Both parties also reference implementing faster regulatory approval processes for medical technology.

It is clear both parties plan to make significant investment in digital technology in this sector, seeking in particular to leverage the power and efficiency of AI.

AI in government

Both the major parties' manifestos flag up the intention to expand the use of AI in the public sector to drive efficiency. The Conservative Party manifesto states that it will double digital and AI expertise in the civil service. The Labour Party highlights the opportunities from harnessing new technology in the public sector.

These ambitions will likely necessitate significant relationships between government departments and AI developers and providers. The Labour manifesto also pledges to reform public procurement processes to ensure smaller businesses are not closed out of tendering for government contracts.

Osborne Clarke comment

While it appears very unlikely that new legislation concerning AI will be proposed in the next King's Speech on 17 July, equally it cannot be ruled out in future if, as is widely expected, the Labour Party wins the next election. This would most likely concern AI safety but there are also signs that a Labour government might want to address the impact of AI on workers and the workplace in due course.

While the Labour manifesto does not address the point, it is clear that the next government will need to find a way through the current impasse between rights holders and AI developers. It has been looking increasingly unlikely that a voluntary code would be acceptable, so we may also see legislation on this issue.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
