ARTICLE
8 September 2025

How To Achieve Cybersecurity Compliance With The EU AI Act

AC
Ankura Consulting Group LLC

Contributor

Ankura Consulting Group, LLC is an independent global expert services and advisory firm that delivers end-to-end solutions to help clients at critical inflection points related to conflict, crisis, performance, risk, strategy, and transformation. Ankura consists of more than 1,800 professionals and has served 3,000+ clients across 55 countries. Collaborative lateral thinking, hard-earned experience, and multidisciplinary capabilities drive results and Ankura is unrivalled in its ability to assist clients to Protect, Create, and Recover Value. For more information, please visit, ankura.com.
In our initial article in this series, "Navigating AI Regulation Globally: A Guide for Developers and Security Experts," we offered a comprehensive overview...
United States Colorado Technology

In our initial article in this series, "Navigating AI Regulation Globally: A Guide for Developers and Security Experts," we offered a comprehensive overview for AI developers and security experts on the intricate landscape of global AI regulations. This article underscored the importance of comprehending and complying with regulations such as the EU AI Act and Colorado AI Act, which mandate ethical standards and accountability in the development and deployment of AI technologies.

Our second article in this series, "Navigating the Cybersecurity Requirements of the Colorado AI Law," guided organizations in implementing risk management policies and programs required by the Colorado AI Law for high-risk AI systems, emphasizing alignment with the NIST AI Risk Management Framework (NIST AI RMF) and equivalent standards like ISO 42001. It highlighted the importance of continuous testing, incident identification, and monitoring to enhance the trustworthiness and responsible deployment of AI systems.

This article sets out the specific cybersecurity requirements in the EU AI Act, whereby the provisions for high-risk AI become enforceable in August 2026.

The EU AI Act, Chapter 2, Articles 9-15 titled "Requirements for High-Risk AI Systems" outlines the requirements for high-risk AI systems. Many of these requirements will drive enhancements to an organization's cybersecurity program. Specifically, within Chapter 2 of the EU AI Act, Articles 9-15 require the following:

  • Article 9 mandates providers to implement documented risk management systems, addressing potential risks and misuse through testing protocols.
  • Article 10 focuses on data governance, requiring protocols for model training, validation, and testing to address biases and data gaps.
  • Article 11 requires technical documentation to ensure compliance before market placement.
  • Article 12 specifies automatic logging of events for high-risk systems, including usage times, database references, and input matches.
  • Article 13 emphasizes transparency, requiring systems to provide clear instructions for users, documenting accuracy, robustness, and cybersecurity.
  • Article 14 requires human oversight capabilities, ensuring users can understand, interpret, and control AI systems.
  • Article 15 of the EU AI Act outlines requirements for high-risk AI systems to ensure accuracy, robustness, and cybersecurity throughout their lifecycle. These systems must be designed to achieve appropriate levels of accuracy, declared in their usage instructions, and be resilient against errors and inconsistencies arising from system interactions or environmental factors. Robustness can be enhanced through technical redundancy measures, such as backup or fail-safe plans. Systems that continue learning after deployment must address feedback loops to mitigate biased outputs. Furthermore, high-risk AI systems must be safeguarded against unauthorized attempts to exploit vulnerabilities, with technical solutions tailored to the specific risks and circumstances. Measures should include preventing and controlling attacks like data poisoning, adversarial examples, or model flaws to maintain the integrity and performance of AI systems.

Approaches to Implementing Continuous Monitoring of AI Models

In order to comply with the EU AI Act, organizations must establish robust cybersecurity solutions that facilitate comprehensive testing, incident identification, and continuous monitoring of their AI systems. A key focus should be placed on deploying strategies that effectively prevent adversarial attacks, such as prompt injection attacks, backdoor insertion, data poisoning, and training data extraction. To ensure thorough tracking, efficient response, and swift recovery from incidents, these solutions should be complemented by detailed metrics and benchmark reports.

Ankura and ODIN

0DIN offers advanced continuous monitoring solutions capable of scanning any large language model (LLM) through either on-premise or SaaS-based continuous scanners. These scanners execute threat intelligence probes across models and providers on an hourly, daily, or continuous integration/continuous deployment basis. By employing interactive dashboards, heat maps, and model comparisons, organizations can effectively quantify and automatically mitigate risks associated with generative AI.

Ankura and 0DIN have joined forces to deliver state-of-the-art AI testing and monitoring solutions to their clientele. For further information, please reach out to the Ankura Cybersecurity Team or your 0DIN account executive.

This article was first published with 0DIN.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

Mondaq uses cookies on this website. By using our website you agree to our use of cookies as set out in our Privacy Policy.

Learn More