Want to learn more about drafting, negotiating, and understanding intellectual property and technology contracts and have 10 minutes to spare? Grab your morning coffee or afternoon tea and dig into our Tech Contract Quick Bytes—small servings of technical contract insights expertly prepared by our seasoned attorneys. This month, we're serving up a discussion on the Colorado AI Act.
In May 2024, Colorado Governor Jared Polis signed the Colorado Artificial Intelligence Act (CAIA), legislation regulating "high-risk" artificial intelligence systems. In general, when the law takes effect in 2026, it will impose internal governance and disclosure/reporting requirements on companies that develop or use AI systems to make decisions susceptible to algorithmic discrimination, such as when those systems are used in employment, lending, and housing services.
These disclosure requirements, in turn, may affect contracting around the regulated systems. CAIA covers "high-risk" AI systems, which it defines as machine-based systems used to make a consequential decision about a consumer. Consequential decisions are limited to those that have a material legal or similarly significant effect on the provision or denial of educational, employment, financial, healthcare, housing, insurance, government, or legal services.
Specifically, CAIA applies to developers and deployers of high-risk AI systems. A "developer" is anyone doing business in Colorado that "develops or intentionally and substantially modifies" an AI system. A "deployer" is anyone doing business in the state who uses a high-risk AI system.
Once CAIA takes effect, the Colorado attorney general will have exclusive authority to enforce it (i.e., there is no private right of action) and may find developers and deployers in violation if they fail to use "reasonable care" to protect consumers from algorithmic discrimination. CAIA, however, provides a rebuttable presumption of reasonable care if regulated companies comply with certain risk-management and disclosure requirements.
Disclosure Requirements
Developers
- Developers must make certain disclosures to other developers and to deployers using their high-risk AI systems. For example, these disclosures must include a description of the system's reasonably foreseeable uses and its known harmful or inappropriate uses.
- CAIA also requires developers to publish a statement on their websites (or in a public use-case inventory) about their high-risk AI systems. This statement must describe the types of high-risk AI systems the developer makes available to others and how the developer manages risks of algorithmic discrimination that arise in the development process.
Deployers
- When a deployer begins using a high-risk system to make consequential decisions, it must directly notify the consumer of such use. This notice must describe the high-risk system, its purpose, the nature of the consequential decision being made, the deployer's contact information, and how to access more information on the deployer's website.
- The notification must be provided directly to the consumer, in plain language, in all languages in which the deployer ordinarily does business, and in a format accessible to consumers with disabilities. If direct notification is not possible, deployers must make the disclosures available in a manner reasonably calculated to reach the consumer.
- Deployers must also separately notify consumers after making an adverse consequential decision about them, and must give those consumers an opportunity to correct any inaccurate personal data used in the decision and to appeal the decision.
Impact on Contracting
While CAIA requires significant exchanges of information among the developers, deployers, and consumers of high-risk AI, the law is not prescriptive as to how and where disclosures are made. Some required disclosures, such as permissible uses, known risks, and the parties' rights, cover ground already common in licenses and terms of service. After CAIA takes effect on February 1, 2026, developers and deployers may choose to integrate its mandated disclosures into their online agreements. Note, however, that other disclosures, such as deployers' adverse-decision notifications, must be made separately from standard contracts.
Overall, CAIA's primary goal is to prevent algorithmic discrimination. Colorado is concerned with how parties mitigate consumer risk, not with the specific language used to do so. AI companies should therefore keep the state's interests in mind when contracting and when drafting the mandated disclosures in their contracts.
Special thanks to Michaela Bevan for assistance with this article.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.