UK Requests Views On Two New Codes Of Practice To Boost Cybersecurity In AI Models And Software

Finnegan, Henderson, Farabow, Garrett & Dunner, LLP


Finnegan, Henderson, Farabow, Garrett & Dunner, LLP is a law firm dedicated to advancing ideas, discoveries, and innovations that drive businesses around the world. From offices in the United States, Europe, and Asia, Finnegan works with leading innovators to protect, advocate, and leverage their most important intellectual property (IP) assets.
The UK government announced two voluntary cybersecurity codes of practice for AI models and software as part of its £2.6 billion National Cyber Strategy. Public consultations on these codes will run until 9 August 2024, aiming to enhance AI security and resilience.

On 15 May 2024, the UK government announced two new voluntary codes of practice with the aim of improving cybersecurity in AI models and software. The two new proposals are part of the government's £2.6 billion National Cyber Strategy to increase cybersecurity and resilience. The UK government is seeking views from industry and the public on each of these codes of practice through two public consultations, each running between 15 May and 9 August 2024.

AI Cybersecurity Code of Practice and International Standard

The AI industry in the UK contributes £3.7 billion to the economy as it brings innovation to many sectors, such as transport, agriculture, and crime prevention. Whilst developers continue to explore AI's ever-evolving capabilities, AI systems remain vulnerable to exploitation, which could result in increased privacy breaches and the loss of data.

The code of practice sets baseline cybersecurity requirements for all AI technologies and distinguishes actions that need to be taken by different stakeholders across the AI supply chain to protect end users, with a particular focus on developers and system operators. Of key importance, the code provides practical support to developers on how to implement a secure-by-design approach as part of their AI design and development process, to ensure cybersecurity is built into the design of AI from the very start. 

The code applies to all AI technologies, with the aim of ensuring that security is effectively built into AI models and systems across the AI lifecycle. The government is proposing that the AI Code of Practice be taken into a global standards development organisation for further development and to set baseline security requirements for stakeholders in the AI supply chain.

As currently drafted, the code sets out principles associated with the concepts of secure design, secure development, secure deployment, and secure maintenance. The principles associated with secure design address raising staff awareness of threats and risks to AI; designing systems for security as well as performance and functionality; modelling threats to the system; and ensuring user interactions are informed by AI-specific risks. As to secure development, the code outlines principles related to asset identification and protection; infrastructure security; supply chain security; data, model, and prompt documentation; and testing and evaluation. The principle for secure deployment covers communication and processes associated with end users, and the principles for secure maintenance address maintaining regular security updates for AI models and systems as well as monitoring the behaviour of systems.

Code of Practice for Software Vendors

Over the last year, half of businesses and a third of charities reported cyber breaches or attacks, with phishing being the most common type of breach. The Code of Practice for Software Vendors is designed to ensure that software security is fundamental to software development and distribution, and outlines the fundamental security and resilience measures expected of organisations that develop or sell software.

This code reflects the understanding that strong cybersecurity practices protect the foundation of technology products as well as new technologies such as AI, as software is an integral part of how AI models and systems function. It aims to ensure that organisations developing and/or selling software, or products containing software, prioritise security and resilience in the design of their products, as well as maintaining security throughout the lifetime of the product. The code further aims to ensure that vendor organisations provide sufficient information to customers to enable effective risk and incident management.

This principles-based code consists of four overarching principles, further expounded upon across twenty-one provisions. The principles are as follows:

  1. Secure design and development
  2. Build environment security
  3. Secure deployment and maintenance
  4. Communication with customers

Next Steps

Notably, the codes support the UK government's wider efforts on AI and regulation, such as UK data protection law. Whilst the proposed codes of practice would be voluntary, the UK government, working with interested stakeholders, will monitor and evaluate uptake of the codes to determine whether regulatory action on AI is needed in the future.

In the meantime, the UK government will be accepting public feedback and recommendations on the draft codes until 9 August 2024.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
