9 July 2024

Key Takeaways From Colorado's Consumer Protections For Artificial Intelligence Act

Finnegan, Henderson, Farabow, Garrett & Dunner, LLP

On May 17, 2024, Colorado enacted the Consumer Protections for Artificial Intelligence Act, a first-of-its-kind law in the United States.1 The law will regulate the development and deployment of artificial intelligence (AI) starting in February 2026. Its primary focus is regulating and limiting “algorithmic discrimination,” defined as using an AI system to discriminate based on a class protected under either Colorado or federal law.2 A related concept is the “consequential decision”: a decision that has a material legal or similarly significant effect on the provision or denial to a consumer of, or the cost or terms of, education enrollment or an education opportunity; employment or an employment opportunity; financial or lending services; essential government services; healthcare services; housing; insurance; or a legal service.3 AI systems that make or assist in making consequential decisions are deemed “high-risk.”4

The law regulates two groups: those that deploy AI systems (“deployers”)5 and those that develop or intentionally and substantially modify AI systems (“developers”).6 Both deployers and developers owe a duty of reasonable care to protect consumers from “known or reasonably foreseeable” risks of algorithmic discrimination by high-risk AI systems.7 Deployers and developers who meet their respective obligations for high-risk AI are rebuttably presumed to have upheld that duty. Each group has its own obligations.

Deployers' Obligations

Deployers must satisfy several requirements to meet their obligations for high-risk systems. First, deployers must implement an AI risk management policy that addresses the principles, processes, and personnel the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination.8 The policy must be reasonable in light of: (1) the deployer's size and complexity; (2) the nature and scope of the AI system, including its intended use; (3) the sensitivity and amount of data processed by the AI system when deployed; and (4) guidance provided by the NIST AI Risk Management Framework, ISO/IEC 42001, or an equally stringent framework that is “nationally or internationally recognized.” A single risk management policy may cover more than one of the deployer's AI systems; a separate policy per AI system is not necessarily required.9

Second, deployers or third-party contractors must conduct impact assessments based on information “reasonably known by or available to the deployer.”10 An impact assessment must include, at a minimum: (1) a statement by the deployer about the AI's (a) purpose, (b) intended uses, (c) deployment context, and (d) benefits; (2) a determination of whether deploying the AI poses known or reasonably foreseeable risks of algorithmic discrimination and, if so, how the AI might discriminate and the steps taken to mitigate this risk; (3) a description of the categories of data the AI processes as inputs and the outputs it produces; (4) the metrics used to evaluate the AI's performance; (5) known limitations of the AI system; (6) an overview of any measures taken to improve transparency to consumers, including any measures taken to tell consumers when the AI is being used; and (7) an overview of the post-deployment monitoring used and user safeguards implemented, including how issues stemming from deploying the AI are handled.11 Additionally, deployers that customize the AI model with additional data, such as through fine-tuning or retrieval-augmented generation, must provide an overview of the categories of data used.12
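
For deployers building compliance tooling, these statutory elements map naturally onto a structured record. The following is a minimal sketch, assuming a team wants to track assessments programmatically; the class and field names are our own shorthand for the statutory elements, not terms defined by the law.

```python
from dataclasses import dataclass, field

@dataclass
class ImpactAssessment:
    """Illustrative record of the elements required by § 6-1-1703(3)(b)."""
    purpose: str                          # (1)(a) the system's purpose
    intended_uses: list[str]              # (1)(b) intended use cases
    deployment_context: str               # (1)(c) context of deployment
    benefits: list[str]                   # (1)(d) claimed benefits
    discrimination_risks: list[str]       # (2) known or foreseeable risks, if any
    mitigation_steps: list[str]           # (2) steps taken to mitigate those risks
    input_data_categories: list[str]      # (3) categories of input data
    output_description: str               # (3) the outputs the system produces
    performance_metrics: list[str]        # (4) metrics used to evaluate performance
    known_limitations: list[str]          # (5) known limitations of the system
    transparency_measures: list[str]      # (6) consumer transparency measures
    post_deployment_monitoring: str       # (7) monitoring and user safeguards
    # Required only if the deployer customizes the model (e.g., fine-tuning or
    # retrieval-augmented generation), per § 6-1-1703(3)(b)(IV).
    customization_data_categories: list[str] = field(default_factory=list)
```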

A single impact assessment may cover only a “comparable set of AI systems” that have been deployed.13 An initial impact assessment must be completed when the AI system is deployed, and assessments must be repeated at least annually thereafter.14 An impact assessment must also be completed within 90 days of any intentional and substantial change to the AI system.15 That assessment must also state the extent to which the AI system's current usage is consistent with the developer's intended use.16 An impact assessment conducted to comply with another law or regulation can satisfy these requirements if it is “reasonably similar in scope and effect” to what Colorado requires.17 Deployers must retain all impact assessments and related records until three years after the final deployment of the AI.18
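
These timing rules reduce to a date calculation: the next assessment is due one year after the last one, or 90 days after an intentional and substantial modification, whichever is earlier. Below is a minimal sketch of that calculation, assuming a compliance calendar tracks both triggers; the helper name and the 365-day year are our own simplifications.

```python
from datetime import date, timedelta

def next_assessment_due(last_assessment: date,
                        substantial_modification: date | None = None) -> date:
    """Earliest upcoming impact-assessment deadline under § 6-1-1703(3).

    Annual reassessment falls one year after the last assessment; an
    intentional and substantial modification triggers a new assessment
    within 90 days of the change.
    """
    annual_due = last_assessment + timedelta(days=365)
    if substantial_modification is not None:
        return min(annual_due, substantial_modification + timedelta(days=90))
    return annual_due

# Example: a system assessed at deployment, then substantially modified.
print(next_assessment_due(date(2026, 2, 1), date(2026, 6, 15)))  # 2026-09-13
```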

Deployers must also conduct annual reviews of each deployed high-risk AI system to ensure that the system is not engaging in algorithmic discrimination.19 For systems already deployed, the first review must be completed by February 2026.20

Third, a deployer must provide a clear statement on its website summarizing the types of high-risk AI systems it currently deploys and how it manages known or reasonably foreseeable risks of algorithmic discrimination for each, along with a detailed description of the nature, source, and extent of the information the deployer collects and uses.21 The statement must be routinely updated.22

Fourth, deployers of high-risk AI systems that make consequential decisions about consumers must notify consumers that an AI system is being used and disclose, in plain language: the nature of the consequential decision being made; a description of the AI system; instructions for accessing the summary posted on the deployer's website; and the deployer's contact information.23 If the deployer collects consumers' personal information to make decisions that produce “legal or similarly significant effects,” the notice must also tell consumers how to exercise their state statutory right to opt out of that data processing.24

When an AI system makes a decision adverse to the consumer, the deployer must provide an additional notice.25 The notice must state, in plain language: the reason(s) for the consequential decision, including the degree and manner in which the AI contributed to the decision; the type of data used to make the decision; and the source(s) of that data.26 The deployer must give the adversely affected consumer the ability to correct any incorrect information and to appeal the decision.27
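
As a rough illustration, the sketch below assembles those required elements into a plain-language notice. The function and parameter names are hypothetical, not drawn from the statute; a real notice would also need the language and accessibility accommodations described next.

```python
def adverse_decision_notice(reasons: list[str],
                            ai_contribution: str,
                            data_types: list[str],
                            data_sources: list[str]) -> str:
    """Assemble the plain-language contents of a post-decision notice."""
    return (
        f"Reason(s) for the decision: {'; '.join(reasons)}.\n"
        f"How the AI system contributed: {ai_contribution}\n"
        f"Types of data used: {', '.join(data_types)}.\n"
        f"Sources of that data: {', '.join(data_sources)}.\n"
        "You may correct any incorrect information used in this decision, "
        "and you may appeal the decision."
    )
```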

The deployer must attempt to provide any required notice to the consumer directly, in all languages in which the deployer does business and in a format accessible to consumers with disabilities.28 If the notice cannot be provided directly, the deployer must make its contents available in a manner “reasonably calculated” to inform the consumer.29

Finally, deployers must notify the attorney general without unreasonable delay, and no later than 90 days after discovery, if they find that a deployed AI system has engaged in algorithmic discrimination.30 The attorney general may request the deployer's risk management policy, impact assessment(s), or records to verify that the deployer is complying with the law.31

Small-Deployer Exception

Small-scale deployers are not required to implement a risk management policy and program, conduct impact assessments, or publicly disclose information about the data they use and how they manage known or reasonably foreseeable risks.32 To qualify as a small-scale deployer, the deployer must have fewer than 50 employees throughout the AI system's lifecycle.33 Even then, the exemption applies only if: the AI system was not trained on the small-scale deployer's own data; the system is used only as intended by its developer; the system continues to learn from data that is not the small-scale deployer's own; and the small-scale deployer makes available to consumers any impact assessment completed by the AI's developer.34
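
Read together, the exemption is a conjunctive test: every condition must hold. Below is a minimal sketch of that logic, with parameter names of our own invention.

```python
def small_deployer_exempt(employee_count: int,
                          trained_on_own_data: bool,
                          used_as_developer_intended: bool,
                          learns_from_own_data: bool,
                          developer_assessments_available: bool) -> bool:
    """Illustrative restatement of the conjunctive conditions in § 6-1-1703(6)."""
    return (employee_count < 50
            and not trained_on_own_data
            and used_as_developer_intended
            and not learns_from_own_data
            and developer_assessments_available)
```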

Developers' Obligations

Developers must meet five requirements to satisfy their obligations for high-risk systems. First, developers must provide other developers and deployers of the AI system with a statement describing the system's reasonably foreseeable uses and its known harmful or inappropriate uses.35

Second, the developer must provide documentation to developers and deployers that discloses: (1) the purpose of the AI; (2) the benefits of and intended uses of the AI; (3) known or reasonably foreseeable limitations of the AI system, such as risks of algorithmic discrimination based on the system's intended uses; (4) a high-level overview of the type of data used to train the AI system; and (5) any other information needed by deployers to complete an impact assessment.36

Third, the developer must produce documents that describe: (1) how the AI's performance was measured and the steps taken to mitigate algorithmic discrimination prior to making the AI available; (2) how the training data was reviewed for potential biases and suitability for training; (3) the AI's intended outputs; (4) the measures taken to mitigate known or reasonably foreseeable risks of algorithmic discrimination; (5) how the AI should and should not be used; (6) how to monitor the AI when it makes consequential decisions; and (7) any other information reasonably necessary for the deployer to understand the outputs and monitor the AI's performance.37 Developers that are also deployers need not produce these documents if the AI system is not provided to an entity unaffiliated with the developer-deployer.38 Otherwise, developers that make an AI system available to deployers or other developers must make available the documentation and information necessary to conduct an impact assessment.39
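
As with the deployer-side impact assessment, these documentation elements can be modeled as a structured record. The sketch below is illustrative only; the field names paraphrase the statutory elements rather than quote them.

```python
from dataclasses import dataclass

@dataclass
class DeveloperDocumentation:
    """Illustrative bundle of the documentation described in § 6-1-1702(2)(c)."""
    performance_evaluation: str           # (1) how performance was measured
    pre_release_mitigations: str          # (1) discrimination mitigation before release
    training_data_review: str             # (2) bias and suitability review of training data
    intended_outputs: str                 # (3) the system's intended outputs
    risk_mitigation_measures: list[str]   # (4) measures against known or foreseeable risks
    usage_guidance: str                   # (5) how the system should and should not be used
    monitoring_guidance: str              # (6) monitoring consequential decisions
    other_information: str                # (7) anything else deployers reasonably need
```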

Fourth, the developer must display on its website, or in a public use-case inventory, a summary of the types of high-risk systems it has developed or intentionally and substantially modified and currently makes available, as well as how it manages the known or reasonably foreseeable risks of algorithmic discrimination.40 The summary must be updated as necessary to remain accurate, and within 90 days of any intentional and substantial modification to an AI system.41

Fifth, the developer must inform the attorney general of any known or reasonably foreseeable risks of algorithmic discrimination stemming from the system's intended uses.42 The developer must provide this notice no later than 90 days after either discovering that the AI has caused or is reasonably likely to cause algorithmic discrimination or receiving a report of algorithmic discrimination from a deployer.43 The attorney general may request the developer's documentation to ensure compliance with the new law.44

Other Regulations

While the bulk of the law focuses on regulating high-risk AI systems, the law also regulates AI systems generally. Deployers and developers that make available any AI system designed to interact with consumers must disclose to consumers that they are interacting with an AI system, unless a reasonable person would already know.45

Additionally, deployers and developers of high-risk systems are not required to disclose information protected by state or federal law, such as work product and trade secrets, to anyone but the attorney general.46 When a deployer withholds requested information as protected, it must notify the consumer and explain why the information is protected.47

Enforcement

A violation of the new law is also a violation of Colorado's Unfair and Deceptive Trade Practices Act (“CUDTPA”).48 The attorney general has exclusive authority to enforce the new AI consumer protections through the CUDTPA.49 However, the law creates an affirmative defense for developers, deployers, and other regulated persons who discover and cure a violation through solicited feedback, adversarial testing or red teaming, or an internal review, and who are otherwise in compliance with a recognized risk management framework.50 Additionally, the attorney general may promulgate rules addressing: documentation requirements for developers; the requirements and contents of notices and disclosures; the contents and requirements of AI risk management policies and impact assessments; and the requirements for establishing the rebuttable presumption of reasonable care and the affirmative defense.51

Footnotes

1. https://leg.colorado.gov/bills/sb24-205

2. § 6-1-1701(1)(a)

3. § 6-1-1701(3)

4. § 6-1-1701(9)(a)

5. § 6-1-1701(6)

6. § 6-1-1701(7)

7. § 6-1-1702(1); § 6-1-1703(1)

8. § 6-1-1703(2)(a)

9. § 6-1-1703(2)(b)

10. § 6-1-1703(3)(a)-(b)

11. § 6-1-1703(3)(b)

12. § 6-1-1703(3)(b)(IV)

13. § 6-1-1703(3)(d)

14. § 6-1-1703(3)(a)(I)-(II)

15. § 6-1-1703(3)(c)

16. Id.

17. § 6-1-1703(3)(e)

18. § 6-1-1703(3)(f)

19. § 6-1-1703(3)(g)

20. Id.

21. § 6-1-1703(5)(a)

22. § 6-1-1703(5)(b)

23. § 6-1-1703(4)(a)

24. §§ 6-1-1703(4)(a)(III), -1306(1)(a)(I)(C)

25. § 6-1-1703(4)(b)(I)

26. Id.

27. § 6-1-1703(4)(b)(II)-(III)

28. § 6-1-1703(4)(c)(I)

29. § 6-1-1703(4)(c)(II)

30. § 6-1-1703(7)

31. § 6-1-1703(9)

32. § 6-1-1703(6)

33. § 6-1-1703(6)(a)(I)

34. § 6-1-1703(6)(a)(II), (b), (c)

35. § 6-1-1702(2)(a)

36. § 6-1-1702(2)(b)

37. § 6-1-1702(2)(c)

38. § 6-1-1702(3)(b)

39. § 6-1-1702(3)(a)

40. § 6-1-1702(4)(a)

41. § 6-1-1702(4)(b)

42. § 6-1-1702(5)

43. § 6-1-1702(5)

44. § 6-1-1702(7)

45. § 6-1-1704(1)-(2)

46. §§ 6-1-1702(6), -1703(8)

47. § 6-1-1703(8)

48. § 6-1-105(hhhh)

49. § 6-1-1706(1)

50. § 6-1-1706(3)

51. § 6-1-1707(1)

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
