Colorado became the first U.S. state to pass a law protecting consumers from harm when using artificial intelligence (AI). Senate Bill 24-205 on Consumer Protections for Artificial Intelligence was signed into law on May 17, 2024. The law applies to any organization or business that creates, develops, or uses AI in products or services that can be used by or affect individuals in the state of Colorado.
Colorado AI Law
The Colorado law requires developers of high-risk AI systems to use reasonable care to prevent algorithmic discrimination.
- Where a developer is defined as an entity that creates an AI system or intentionally and substantially modifies one;
- Where algorithmic discrimination is defined as unlawful differential treatment or impact that disfavors individuals on the basis of protected classifications such as age, race, or religion; and
- Where reasonable care obligates the developer to comply with specific provisions of the law. Obligations include:
    - Disclosing specific information about the high-risk AI system to deployers (the businesses using high-risk AI systems);
    - Completing AI impact assessments and making them available to deployers;
    - Disclosing to deployers how the developer manages any risk of algorithmic discrimination that can arise from intentional and substantial modification of each high-risk AI system; and
    - Disclosing to the attorney general and to deployers of the high-risk AI system, within 90 days after discovery or a report from a deployer, that the high-risk AI system has caused algorithmic discrimination.
In addition, deployers of high-risk AI systems must, like developers, use reasonable care to avoid algorithmic discrimination by complying with obligations that include:
- Implementing a risk management policy and program for high-risk AI systems.
- Conducting AI impact assessments of high-risk AI systems and ensuring deployment does not result in algorithmic discrimination.
- Notifying consumers when high-risk AI systems make consequential decisions about them.
- Providing consumers with individual rights to opt out of, correct, or appeal the use of their personal data in high-risk AI systems.
- Disclosing the types of high-risk AI systems deployed, any known risks of algorithmic discrimination, and the information collected and used by deployers in the AI systems.
- Disclosing to the attorney general, within 90 days of discovery, any algorithmic discrimination that a high-risk AI system has caused.
Utilize the NIST AI RMF To Prepare for the Colorado AI Law
Organizations impacted by the law should conduct a regulatory compliance assessment of their AI program and AI systems using a framework such as the National Institute of Standards and Technology's AI Risk Management Framework (NIST AI RMF). The NIST AI RMF consists of four functions (Govern, Map, Measure, and Manage) comprising 19 categories and 72 subcategories. Assessing your organization against the categories and/or subcategories will allow you to identify risks in your AI systems and establish your AI program and AI governance posture. Additionally, the NIST AI RMF categories can be mapped to requirements in the Colorado AI Law, such as:
| Colorado AI Law Requirement | NIST AI RMF Category |
| --- | --- |
| AI impact assessments conducted annually on high-risk AI systems. | MAP 4: Risks and benefits are mapped for all components of the AI system, including third-party software and data. |
| | MAP 5: Impacts to individuals, groups, communities, organizations, and society are characterized. |
| | MEASURE 2: AI systems are evaluated for trustworthy characteristics. |
| | MEASURE 3: Mechanisms for tracking identified AI risks over time are in place. |
| | MANAGE 1: AI risks based on assessments and other analytical output from the MAP and MEASURE functions are prioritized, responded to, and managed. |
| | MANAGE 4: Risk treatments, including response and recovery, and communication plans for the identified and measured AI risks are documented and monitored regularly. |
| Transparency of AI systems through notification, including disclosure of purpose, consequences of decisions, the deployer, etc. | GOVERN 1: Policies, processes, procedures, and practices across the organization related to the mapping, measuring, and managing of AI risks are in place, transparent, and implemented effectively. |
| | MAP 1: Context is established and understood. |
| | MAP 3: AI capabilities, targeted usage, goals, and expected benefits and costs compared with appropriate benchmarks are understood. |
| AI governance programs, including a risk management policy and program. | GOVERN 2: Accountability structures are in place so that the appropriate teams and individuals are empowered, responsible, and trained for mapping, measuring, and managing AI risks. |
| | GOVERN 4: Organizational teams are committed to a culture that considers and communicates AI risk. |
| | GOVERN 5: Processes are in place for robust engagement with relevant AI actors. |
| Incident reporting: if a high-risk AI system causes or risks causing algorithmic discrimination, disclosure must be made within 90 days of the incident. | MEASURE 4: Feedback about efficacy of measurement is gathered and assessed. |
| | MANAGE 2: Strategies to maximize AI benefits and minimize negative impacts are planned, prepared, implemented, documented, and informed by input from relevant AI actors. |
| | MANAGE 4: Risk treatments, including response and recovery, and communication plans for the identified and measured AI risks are documented and monitored regularly. |
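A mapping like the one above is easier to operationalize when captured in a machine-readable form that assessment tooling can track. The sketch below is a minimal, hypothetical Python structure; the requirement labels and category identifiers mirror the table, but the schema itself is illustrative and not prescribed by the law or the NIST AI RMF.

```python
from dataclasses import dataclass

# Hypothetical tracker pairing Colorado AI Law requirements with the
# NIST AI RMF categories from the table above. The schema is illustrative,
# not prescribed by the law or the framework.
@dataclass
class RequirementMapping:
    requirement: str           # Colorado AI Law requirement
    rmf_categories: list[str]  # Mapped NIST AI RMF categories
    assessed: bool = False     # Has the organization assessed this area?

MAPPINGS = [
    RequirementMapping(
        "Annual AI impact assessments for high-risk AI systems",
        ["MAP 4", "MAP 5", "MEASURE 2", "MEASURE 3", "MANAGE 1", "MANAGE 4"],
    ),
    RequirementMapping(
        "Transparency of AI systems through notification",
        ["GOVERN 1", "MAP 1", "MAP 3"],
    ),
    RequirementMapping(
        "AI governance programs, including risk management policy and program",
        ["GOVERN 2", "GOVERN 4", "GOVERN 5"],
    ),
    RequirementMapping(
        "Incident reporting within 90 days of discovered algorithmic discrimination",
        ["MEASURE 4", "MANAGE 2", "MANAGE 4"],
    ),
]

# Surface requirements whose mapped categories have not yet been assessed.
for mapping in MAPPINGS:
    if not mapping.assessed:
        print(f"Open gap: {mapping.requirement} -> {', '.join(mapping.rmf_categories)}")
```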
Conduct AI Management Activities for Compliance
After assessing your organization's governance and AI compliance posture, you may consider implementing AI management activities such as those outlined below to remediate gaps and support AI maturity.
- Create a data inventory or data map of all your AI systems, including the types of data used, anticipated training models and outcomes, and the parties with whom you will share the data, training models, AI systems, and AI system outputs (a minimal inventory sketch follows this list).
- Create or leverage Privacy Impact Assessment processes to include AI risk analysis and mitigation, and conduct AI impact assessments, especially for high-risk AI systems.
- Create a transparent notice for consumers and/or deployers covering the use of AI, the risks of AI systems, the parties with whom you share data and AI system output, and how you test, monitor, and protect the data and AI systems.
- Create or modify individual privacy rights request processes to include AI-specific rights to delete, correct, appeal, or opt out of the use of an individual's data in AI systems.
- Create or modify third-party due diligence processes with AI developers and partners where AI systems are procured, used, or shared.
- Integrate AI into Privacy-by-Design processes and policies to ensure AI systems are properly developed, tested, monitored, and implemented.
- Integrate AI security and technical safeguards into Information Security programs, including access controls, de-identification techniques, and incident response plans (a de-identification sketch also follows this list).
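For the data inventory item above, a practical starting point is one structured record per AI system. The following is a minimal sketch under assumed field names; nothing in the Colorado AI Law mandates this schema, so adapt it to your organization's inventory standards.

```python
from dataclasses import dataclass, field

# Hypothetical record for an AI system data inventory. Every field name is
# illustrative; the Colorado AI Law does not mandate a particular schema.
@dataclass
class AISystemRecord:
    name: str                   # System or model name
    high_risk: bool             # Used to make consequential decisions?
    data_categories: list[str]  # Types of personal data used
    training_purpose: str       # Anticipated training model and outcome
    shared_with: list[str] = field(default_factory=list)  # Data/output recipients

inventory = [
    AISystemRecord(
        name="loan-underwriting-model",  # hypothetical system
        high_risk=True,
        data_categories=["income", "credit history", "employment status"],
        training_purpose="Predict loan default risk",
        shared_with=["credit-bureau-partner"],  # hypothetical recipient
    ),
]

# High-risk systems carry most Colorado AI Law obligations, so flag them.
for record in inventory:
    if record.high_risk:
        print(f"{record.name}: impact assessment and consumer notice required")
```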
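For the de-identification technique noted in the last item, one common approach is pseudonymization: replacing direct identifiers with salted hashes before data reaches an AI training pipeline. The sketch below is illustrative only; a full de-identification program would also address quasi-identifiers, key management, and re-identification risk.

```python
import hashlib

# Illustrative pseudonymization: replace a direct identifier with a salted
# SHA-256 hash before a record enters an AI training pipeline. The salt is a
# placeholder; in practice it should be generated, rotated, and stored securely.
SALT = b"example-salt-store-securely"

def pseudonymize(identifier: str) -> str:
    """Return a stable pseudonym for a direct identifier."""
    return hashlib.sha256(SALT + identifier.encode("utf-8")).hexdigest()

record = {"email": "jane@example.com", "income": 85_000}
record["email"] = pseudonymize(record["email"])
print(record)  # the email field is now a pseudonym, not a direct identifier
```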
These AI risk management activities are critical for organizations that develop or deploy high-risk AI systems to manage AI governance and compliance effectively. An organization should consider leveraging the NIST AI RMF by assessing and implementing these foundational activities, which will support compliance with the Colorado AI Law.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.