On 27 November 2023, the UK's National Cyber Security Centre ("NCSC") announced new global cybersecurity guidelines entitled "Guidelines for Secure AI System Development", developed together with the US Cybersecurity and Infrastructure Security Agency ("CISA").

In addition to the UK and the US, the guidelines are endorsed by national cybersecurity and intelligence agencies from 16 other countries, including all members of the G7 as well as Nigeria, Singapore, South Korea and Chile. The NCSC says the guidelines will help AI system developers embed security by design into their decision-making at each phase of development.

The guidelines are voluntary and apply to all types of AI systems. It should be noted, however, that the proposed EU AI Act and AI Liability Directive (should they come into force) would impose minimum cybersecurity requirements on in-scope AI systems placed on the EU market.

The Guidelines' Purpose

The guidelines are designed to guide developers through the design, development, deployment and operation of AI systems and to ensure that security remains a core focus throughout the system's life cycle. They are structured into four sections, each corresponding to a stage of the AI system life cycle, as follows:

  1. Secure design covers the design phase of the AI system development cycle. It addresses raising awareness of threats and risks, modelling threats to the system, balancing security against functionality, and considering security when selecting an AI model.
  2. Secure development covers the development phase of the AI model. Here the guidelines focus on securing the supply chain; identifying, tracking and protecting assets; documenting data, models and prompts; and managing technical debt effectively (e.g., through robust lifecycle management and by mitigating it in the future development of similar AI systems).
  3. Secure deployment covers the deployment phase of the AI model. The guidelines address safeguarding infrastructure, ensuring continuous protection of the model, developing incident management processes to respond to compromise, threat or loss, and developing principles for responsible release and for use by end-users.
  4. Secure operation and maintenance covers the operation and maintenance of AI models post-deployment. It addresses monitoring the system's behaviour, logging and monitoring inputs, managing updates following a "secure by design" approach, and sharing information and lessons learned.

Links and related content

Find the guidelines here and the NCSC press release here.

For more information on the proposed AI Liability Directive and AI Act, see our previous blog here and the press release here.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.