The UK Government's consultation on its AI white paper, which sets out a pro-innovation approach to AI regulation, has now closed, and further guidance on how to implement the five principles (listed below) is expected. With these principles in place, the aim is to give consumers greater confidence in using AI products and to provide businesses with the clarity they need to invest and innovate in this space.

The UK is focusing on a more agile framework for governing AI rather than on strict legislation and will empower existing regulators to put the approach into practice as they see fit. This contrasts with the European Union, which is in the process of finalising its draft AI Act. The two differing approaches may present challenges given the cross-border nature of AI supply chains, so companies operating in both markets must develop a clear understanding of the evolving rules in order to remain compliant in their use and development of AI technologies.

Background

Our previous blog on the Overhaul of AI regulatory framework in the UK reported that the Government is committed to regulating AI and to developing a pro-innovation national position on its governance, as outlined in the 2022 AI Policy Paper.

On 29 March 2023, the Government launched its first AI white paper to guide the use and regulation of AI in the UK, to drive innovation and to maintain public trust as AI technology continues to develop. With the UK ranked third in the world for AI publications and AI already contributing £3.7 billion to the UK economy, the UK's strategy is to support the adoption of AI in order to drive innovation, create jobs and deliver other benefits.

White paper guideline principles

The white paper responds to questions raised about the future risks AI could pose to people's privacy, human rights or safety (for example, the fairness of using AI tools to make decisions that affect people's lives, such as assessing loan or mortgage applications). The current patchwork of legal rules governing AI causes confusion and imposes financial and administrative burdens on businesses trying to comply, holding organisations back from using AI to its full potential.

The Government plans to empower existing regulators (such as the Health and Safety Executive, the Equality and Human Rights Commission and the Competition and Markets Authority) to develop tailored, context-specific approaches that suit the way AI is actually being used in their sectors. The framework will rely on existing legislation (such as the Data Protection Act 2018 and the Equality Act 2010) for its implementation.

The AI regulation white paper aims to help create the right environment for AI to develop responsibly and safely in the UK by following the five key principles:

  1. safety, security and robustness: applications of AI should function in a secure, safe and robust way where risks are carefully managed;
  2. transparency and explainability: organisations developing and deploying AI should be able to communicate when and how it is used and explain a system's decision-making process at a level of detail appropriate to the risks posed by the use of AI;
  3. fairness: AI should be used in a way which complies with the UK's existing laws, for example the Equality Act 2010 or UK GDPR, and must not discriminate against individuals or create unfair commercial outcomes;
  4. accountability and governance: measures are needed to ensure there is appropriate oversight of the way AI is being used and clear accountability for the outcomes;
  5. contestability and redress: people need to have clear routes to dispute harmful outcomes or decisions generated by AI.

Next steps

The white paper consultation has now closed and, over the next 12 months, UK regulators will issue practical guidance to organisations, including risk assessment templates, to aid the implementation of the above principles in their sectors.

An allocation of £2 million will fund a new sandbox: a trial environment in which businesses can test how regulation could be applied to AI products and services, supporting innovators in bringing new ideas to market without being blocked by rulebook barriers.

The UK's approach contrasts with that of the EU, which is focusing on more detailed and stringent regulation via legislation aimed at ensuring AI is used ethically and in the public interest. See our discussion regarding the latest developments on the EU's draft AI Act here.

Related content

Find our previous blogs on the:

  • Overhaul of AI regulatory framework in the UK here; and
  • UK's national strategy following the EU's AI Act here.

Find the:

  • AI regulation white paper here;
  • Government's press release here;
  • AI Policy Paper here; and
  • UK Government consultation page here.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.