ARTICLE
29 August 2025

Navigating The Cybersecurity Requirements Of The Colorado AI Law

AC
Ankura Consulting Group LLC

Contributor

Ankura Consulting Group, LLC is an independent global expert services and advisory firm that delivers end-to-end solutions to help clients at critical inflection points related to conflict, crisis, performance, risk, strategy, and transformation. Ankura consists of more than 1,800 professionals and has served 3,000+ clients across 55 countries. Collaborative lateral thinking, hard-earned experience, and multidisciplinary capabilities drive results and Ankura is unrivalled in its ability to assist clients to Protect, Create, and Recover Value. For more information, please visit, ankura.com.
In our first article, "Navigating AI Regulation Globally: A Guide for Developers and Security Experts," we provided an overview for artificial intelligence (AI) developers and security experts regarding...
United States Colorado Technology

In our first article, "Navigating AI Regulation Globally: A Guide for Developers and Security Experts," we provided an overview for artificial intelligence (AI) developers and security experts regarding the complexities of global AI regulations. Our first article highlighted the significance of understanding and complying with regulations like the European Union (EU) AI Act and Colorado AI Act, which enforce ethical standards and accountability in AI technology development and deployment.

This article sets out the specific cybersecurity requirements in the Colorado AI Law, which becomes enforceable in February 2026.

Compliance with the Colorado AI Law and the NIST AI Risk Management Framework

Pursuant to the Colorado AI Law, deployers of high-risk AI systems are required to implement a risk management policy and program to govern the deployment of high-risk systems such as employment and recruiting, financial and lending, or healthcare services. The Colorado AI Law specifically cites that the risk management policy and program should consider the guidance and standards set forth in the "Artificial Intelligence Risk Management Framework" published by the National Institute of Standards and Technology ("NIST AI RMF"), or other equivalent standards such as ISO 42001.1 In our experience, organizations with both high-risk AI systems, as defined by the Colorado AI Law, and customer-facing AI systems are striving to align with the NIST AI RMF.

The NIST AI RMF, published in January 2023, was designed to equip organizations with approaches that increase the trustworthiness of AI systems and to help foster the responsible design, development, deployment, and use of AI systems.2 The NIST AI Framework consists of 19 categories and 72 subcategories within the core functions of Govern, Map, Measure, and Manage.

Testing, Identification of Incidents, and Continuous Monitoring

Within the controls of the NIST AI RMF, there is a constant theme of testing, identification of incidents, and continuous monitoring, all of which involve cybersecurity program activities or enhancements. Examples of such NIST AI RMF subcategory controls include the following:

  • GOVERN 4.3: Organizational practices are in place to enable AI testing, identification of incidents, and information sharing.
  • MEASURE 2.4: The functionality and behavior of the AI system and its components — as identified in the MAP function — are monitored when in production.
  • MEASURE 2.7: AI system security and resilience — as identified in the MAP function — are evaluated and documented.
  • MANAGE 3.1: AI risks and benefits from third-party resources are regularly monitored, and risk controls are applied and documented.
  • MANAGE 3.2: Pre-trained models used for development are monitored as part of regular AI system monitoring and maintenance.
  • MANAGE 4.1: Post-deployment AI system monitoring plans are implemented, including mechanisms for capturing and evaluating input from users and other relevant AI actors, appeal and override, decommissioning, incident response, recovery, and change management.
  • MANAGE 4.3: Incidents and errors are communicated to relevant AI actors, including affected communities. Processes for tracking, responding to, and recovering from incidents and errors are followed and documented.

Approaches to Securing Generative AI via Continuous Monitoring

To comply with the Colorado AI Law and meet the requirements of the NIST AI RMF, organizations need to implement security solutions that support testing, incident identification, and continuous monitoring of their AI systems. Specifically, organizations should focus on solutions to prevent adversarial attacks, including prompt injection attacks, backdoor insertion, data poisoning, and training data extraction. To support risks and issues tracking, responding to incidents and recovery and remediation, these cybersecurity solutions should also include metrics and benchmark reports.

Ankura and ODIN

0DIN's continuous monitoring solutions can scan any large language model (LLM) using an on-prem or SaaS continuous scanner, which runs threat intelligence probes across models and providers on an hourly, daily, or continuous integration/continuous deployment basis. Interactive dashboards, heat maps, and model comparisons help quantify and automatically reduce generative AI risks.

This article was first published with 0DIN.

Footnotes

1 Colorado AI Law. Section 6-1-1703.

2 NIST AI 100-1. Artificial Intelligence Risk Management Framework (AI RMF 1.0). January 2023. Page 2.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

Mondaq uses cookies on this website. By using our website you agree to our use of cookies as set out in our Privacy Policy.

Learn More