Colorado Senate Bill 24-205 ("SB205"), landmark legislation that expressly creates a statutory duty of reasonable care with respect to AI algorithmic discrimination in the employment context, has passed both houses of the Colorado General Assembly and is expected to be signed into law by Governor Jared Polis shortly.
While the European Union and several other jurisdictions—Illinois, Maryland, New York City, Portland (Oregon), Tennessee, and Utah—have enacted laws regulating aspects of AI use by businesses, Colorado would be the first state to directly and broadly establish a duty of reasonable care in the development and deployment of AI tools across hiring, employment, and various consumer-service sectors. If SB205 goes into effect, most employers that use a covered AI tool will be required to take comprehensive compliance steps by February 1, 2026 (the expiration of the compliance grace period), including implementing an AI risk management policy and program, conducting impact assessments of the AI tool, and providing detailed notices. Despite some notable structural refinements and nuanced exceptions to its general application, SB205's overbroad scope and interpretive ambiguity risk undercutting the process efficiencies that are fundamental to the business case for AI.
Application Scope
As with most (if not all) AI laws, SB205's initial task is defining the technologies and techniques that fall within its scope. Here, SB205 is relatively narrow, limiting itself to machine-based algorithms that use inferential techniques to produce predictions, recommendations, decisions, or content more generally.1 SB205 excludes certain AI-based and other technology tools, ranging from calculators and spreadsheets to firewalls, anti-virus software, and other complex electronic applications. This exclusion applies to AI-powered chatbots,2 which need only disclose, to each individual with whom they interact, that they are AI-based rather than human. Thus, beyond a straightforward disclosure that is typically already built in, SB205 does not restrict, for example, an employer's use of AI-powered chatbots to help employees understand benefits and other HR-related needs.
SB205 recognizes the possibility that AI can itself be used to counter discrimination, and thus exempts from its scope the use of AI to "identify, mitigate, or prevent discrimination," to "increase diversity or redress historical discrimination" within "an applicant, customer, or participant pool," or to "otherwise ensure compliance with state and federal law."
SB205 also recognizes that AI tools may be used to "perform a narrow procedural task," or to "detect decision-making patterns or deviations" without influencing or substituting for human decision-making, and exempts such uses. Further, SB205 does not apply to manufacturing and other production activities, and limits its scope to employment, education, and certain other consequential activities.
Outside those restrictions, however, SB205's purported reach is both ambitious and vague. Pursuant to SB205, an in-scope AI tool is "high-risk" (and thus subject to regulation) if it either "makes," or is a "substantial factor" in making, "a decision that has a material legal or similarly significant effect on the provision[,] denial[,] cost or terms" of any of the following: education, finance, health-care, housing, insurance, legal services, essential government services, and, critically, hiring and employment in general. SB205's definition of "substantial factor" is ambiguous, however, stating only that the AI tool meets that standard if it "assists in making" the decision at issue and is "capable of altering the outcome."
SB205 also fails to define the key standard of "material legal or similarly significant effect," creating uncertainty as to which, if any, AI impacts fall outside the scope of the law. The phrase "legal or similarly significant effect" also appears in the European Union's General Data Protection Regulation, as well as in many other countries' data protection laws, as a threshold for determining whether certain AI-related compliance requirements have been triggered. Consequently, the interpretation of these foreign laws by foreign regulators and courts may in time influence the Colorado attorney general's own interpretation, application, and enforcement of SB205.3
Compliance Requirements
What SB205 makes clear, however, is that with few exceptions, those who build or modify "high-risk" AI tools ("Developers") and those who use "high-risk" AI tools ("Deployers") owe a duty of "reasonable care" to all Colorado residents, to protect them from "any known or reasonably foreseeable risks" of AI-driven algorithmic discrimination. Developers and Deployers must, therefore, protect Coloradans from discrimination, i.e., "differential treatment or impact," on the basis of a particularly broad list of protected categories: "actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected under the laws of [Colorado] or federal law."
Compliance with this duty is onerous, requiring extensive and potentially overwhelming transparency, notices, analysis, and documentation.4 The duty applies to all Developers and Deployers "doing business in Colorado," with one exception5 relating to employers with 50 or fewer employees.
Developers and Deployers must document their efforts to analyze and mitigate the potential discriminatory impact of each AI tool. To that end, Developers must provide Deployers with certain information (an illustrative sketch follows the list):
- documentation of the tool's purpose, intended benefits, "reasonably foreseeable uses and known harmful or inappropriate uses," and all "known or reasonably foreseeable limitations" or risks;
- high-level summaries of the tool's training data;
- descriptions of the tool's anti-bias testing, data governance measures, and intended output;
- steps taken to mitigate the risk of discrimination resulting from future use of the tool; and
- instructions on how to use (and not use) the tool, and how to monitor its operation.
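For Developers and Deployers building internal compliance tooling, this disclosure package lends itself to a structured record. The following is a minimal Python sketch; the class and field names are our own illustrative choices, not terms defined by SB205.

```python
from dataclasses import dataclass

@dataclass
class DeveloperDisclosure:
    """Illustrative record of the documentation a Developer provides a Deployer.

    The schema is hypothetical; SB205 prescribes the substance of the
    disclosure, not its format.
    """
    purpose: str                    # the tool's purpose and intended benefits
    foreseeable_uses: list[str]     # reasonably foreseeable uses
    harmful_uses: list[str]         # known harmful or inappropriate uses
    known_limitations: list[str]    # known or reasonably foreseeable limitations and risks
    training_data_summary: str      # high-level summary of the training data
    bias_testing: str               # description of anti-bias testing performed
    data_governance: str            # data governance measures applied
    intended_output: str            # the tool's intended output
    mitigations: list[str]          # steps taken to mitigate discrimination risk
    usage_instructions: str         # how to use (and not use) the tool
    monitoring_instructions: str    # how to monitor the tool in operation
```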
Based in part on this information, Deployers must, in turn, conduct an "impact assessment" of the AI tool in question. This impact assessment must include or describe:
- the purpose, intended use cases, benefits, and deployment context of the tool;
- an analysis of whether the use of the tool "poses any known or reasonably foreseeable" discrimination risk, what risks are posed, and what mitigating steps have been taken;
- the tool's input data categories, its outputs, and an overview of any categories of data used to customize the off-the-shelf version of the tool;
- the tool's known limitations, performance metrics used to evaluate the tool, and a description of transparency measures taken to ensure notice of the use of the AI tool;
- the "post-deployment monitoring and user safeguards" instituted by the Deployer; and
- if the Deployer has made a "deliberate change" to the tool that "results in any new reasonably foreseeable risk" of discrimination, a statement detailing the extent to which the tool is being used in a manner that varies from the Developer's intended uses for that tool.
Deployers may hire a third party to conduct the impact assessment. Impact assessments must be conducted annually at minimum and within 90 days of any "deliberate change" described above. The same 90-day update window also applies to Developers who make a similar "deliberate change" resulting in new risks.
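For compliance calendaring, the annual and 90-day assessment windows reduce to a simple date computation. The sketch below assumes, for illustration, that whichever deadline falls first controls; the function name and that assumption are ours, not the statute's.

```python
from datetime import date, timedelta
from typing import Optional

ANNUAL_REVIEW = timedelta(days=365)  # assessments "annually at minimum"
CHANGE_WINDOW = timedelta(days=90)   # within 90 days of a "deliberate change"

def next_assessment_due(last_assessment: date,
                        deliberate_change: Optional[date] = None) -> date:
    """Latest date by which the next impact assessment should be complete.

    Illustrative only: assumes the earlier of the annual deadline and the
    90-day post-change deadline governs.
    """
    due = last_assessment + ANNUAL_REVIEW
    if deliberate_change is not None:
        due = min(due, deliberate_change + CHANGE_WINDOW)
    return due

# A tool last assessed on March 1, 2026 and deliberately changed on
# September 15, 2026 would need reassessment by December 14, 2026,
# rather than by the annual deadline of March 1, 2027.
print(next_assessment_due(date(2026, 3, 1), date(2026, 9, 15)))  # 2026-12-14
```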
Of possible assistance to multi-jurisdictional employers is the concession that a single impact assessment may cover multiple "comparable" AI tools. In addition, a Deployer may use the same impact assessment that it developed to comply with another applicable law or regulation if that impact assessment is "reasonably similar in scope and effect" to the assessment required by Colorado's law. This suggests that an employer might use the same impact assessment to comply with multiple laws, such as the EU AI Act, China's Personal Information Protection Law, and the California Consumer Privacy Act (once the latter's regulations on automated decision-making are issued). Moreover, the option to comply with other legal requirements may provide some leeway to deviate from the highly prescriptive impact assessment requirements described above.
Deployers must also implement a Risk Management Policy and Program (RMPP). Akin to a detailed AI use policy, the RMPP must set up a risk management framework for AI tool use, and "specify and incorporate the principles, processes, and personnel that the Deployer uses to identify, document, and mitigate known or reasonably foreseeable risks" of discrimination from AI tool use. Of some relief to smaller employers, the RMPP may take into account the size and complexity of the Deployer, the nature and scope of the AI system, and the sensitivity and volume of the data processed.
Developers and Deployers also have an obligation to report certain incidents to the state's attorney general (AG). For example, a Deployer must inform the AG any time it detects an instance of an AI tool causing a discriminatory outcome. A Developer, on the other hand, must inform the AG of any newly discovered discrimination risk detected through ongoing testing and analysis, or of which a "credible report" has been received from a Deployer. The Developer must also provide notice of these new risks to all known Deployers and other Developers of the AI tool in question.
Notice Requirements and Individual Rights
SB205 requires extensive and comprehensive notices to individuals about the use of high-risk AI tools. These notices must come in two forms—one published on the business's publicly available website and one provided "directly" to Colorado residents. Deployers must provide an additional notice to Colorado residents who are subject to a consequential adverse decision, such as denial of employment, made by or with the assistance of an AI tool.
Developers and Deployers must publish, online, a summary of the "high-risk" AI tools that they have developed or deployed, respectively, and how they "manage[] known or reasonably foreseeable" risks of algorithmic discrimination. Deployers must also publish "in detail, the nature, source, and extent of the information collected and used" by them (presumably with respect to AI tool inputs and use).
Of greater business impact is SB205's requirement that Deployers notify each Coloradan subject to the use of a "high-risk" AI tool that will be a substantial factor in making a consequential decision about the individual. In addition to disclosing that the tool is in use, the notice must include (see the illustrative template after this list):
- the purpose of the AI tool, and a plain-language description of the tool;
- the nature of the underlying "consequential decision" being made by the AI tool;
- Colorado residents' right to opt out of any "profiling in furtherance of decisions that produce legal or similarly significant effects";
- the Deployer's contact information; and
- instructions on how to access the online summary of "high-risk" AI tool use posted on the Deployer's website.
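In practice, much of this pre-decision notice can be generated from a fixed template. The following sketch is purely illustrative; the wording, tool name, contact address, and URL are invented placeholders, not statutory language or any actual employer's disclosure.

```python
# Hypothetical pre-decision notice; every name and URL below is a placeholder.
NOTICE_TEMPLATE = """\
We use an automated tool, {tool_name}, in connection with {context}.
What the tool does: {plain_language_description}
Decision involved: {consequential_decision}
You have the right to opt out of profiling in furtherance of decisions
that produce legal or similarly significant effects concerning you.
Questions or opt-out requests: {contact}
Summary of our high-risk AI systems: {summary_url}
"""

print(NOTICE_TEMPLATE.format(
    tool_name="ResumeRanker",  # hypothetical tool
    context="reviewing applications for open positions",
    plain_language_description="scores each application against the posted job requirements",
    consequential_decision="whether an applicant advances to an interview",
    contact="privacy@example.com",
    summary_url="https://example.com/ai-disclosures",
))
```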
The right to opt out of profiling derives from the Colorado Privacy Act, which defines "profiling," as applied to the employment context, as the use of an AI tool to evaluate, analyze, or predict aspects of a Colorado resident's health, reliability, behaviors, location, or movement.
SB205's greatest business impact may be the requirement that Deployers provide notice if the "consequential decision" being made is adverse to an individual. In this adverse action notice, the Deployer must explain in some detail (a) the reasons for the adverse decision, (b) the impact of the AI tool on the decision, (c) the data used by the tool in making (or assisting with) the decision, and (d) the sources of those data. The Deployer must then provide the individual both an opportunity to "correct any incorrect personal data" that was used by the AI tool and an opportunity to appeal the adverse decision—including, "if technically feasible," human review of the decision. This notice-and-appeal process could eliminate the very speed and efficiency gains that, in part, make AI tools worthwhile to invest in, develop, and adopt.
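For a Deployer automating this workflow, the adverse-action notice and the appeal path might be represented as follows. This is a minimal sketch under our own routing assumptions; the record fields track items (a) through (d) above, but the names and logic are hypothetical, not statutory.

```python
from dataclasses import dataclass

@dataclass
class AdverseActionNotice:
    """Illustrative contents of an SB205-style adverse-decision notice."""
    reasons: list[str]       # (a) reasons for the adverse decision
    ai_contribution: str     # (b) the AI tool's impact on the decision
    data_used: list[str]     # (c) data the tool used in making the decision
    data_sources: list[str]  # (d) sources of those data

def route_appeal(corrected_data: dict[str, str],
                 human_review_feasible: bool) -> str:
    """Hypothetical routing of the correction-and-appeal right."""
    if corrected_data:
        # The individual corrected personal data the tool relied on:
        # re-run the decision with the corrected inputs.
        return "rerun_decision_with_corrections"
    if human_review_feasible:
        # The appeal proceeds to human review "if technically feasible."
        return "queue_for_human_review"
    # Otherwise, document why human review is infeasible.
    return "document_infeasibility"
```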
SB205 requires that all these statements and notices be provided "in all languages" used by the Deployer "in the ordinary course of the Deployer's business," and in a format that is accessible to those with a disability.
Enforcement and Defenses
Violations of SB205 are deemed to constitute unfair and deceptive trade practices. Enforcement authority is lodged exclusively with the Colorado AG. The law specifically denies a private right of action.
SB205 provides some measure of safe harbor, in the form of a rebuttable presumption of reasonable care, to Developers and Deployers who comply with SB205 and its implementing regulations. SB205 also provides an affirmative defense to a Developer or Deployer that "discovers and cures a violation" based on feedback from another source, adversarial/red-team testing, or an internal review process, provided that they are otherwise in compliance with an acceptable risk management framework (such as the National Institute of Standards and Technology's Artificial Intelligence Risk Management Framework).
SB205 confers on the AG the authority to promulgate implementing regulations, including frameworks for the various documents, notices, and disclosures required, specifications for impact assessments and RMPPs, and the contours of the rebuttable presumption and affirmative defense afforded Developers and Deployers.
Employers can reasonably expect active enforcement. The Colorado attorney general has taken aggressive steps to enforce similar consumer protection laws, such as the array of new data privacy laws enacted in the state in recent years.
Conclusion
Owing to the heavy burdens it would impose on employers, SB205 would be a powerful disincentive to use AI tools in Colorado, whether to evaluate applicants, employees, and perhaps even contract workers, or for other purposes beyond the employment context. In the modern era of so-called wandering workers and remote work, SB205 may also have extraterritorial impact, as employers will be hard-pressed to determine whether a particular applicant or other individual subject to that employer's AI tool use is a Colorado resident.
Employers will find the notice requirements particularly onerous, given the level of detail required and the practical challenges of describing AI tools in plain language, explaining how the employer manages an AI tool's known or reasonably foreseeable risks, and determining the degree to which an AI tool contributes to a consequential decision, not to mention the dollar-and-time costs of setting up these additional processes, conducting impact assessments, and implementing risk-management programs.
Moreover, the right to opt out of "profiling," given its broad definition under Colorado law, may severely restrict the efficiencies employers can realize from AI tools, because employers may need to maintain parallel non-AI processes for those who opt out.
Perhaps the most burdensome impact of SB205 may be that its individual appeal processes could overwhelm an entire hiring operation. Where AI tools are effectively considering thousands of applicants for each opening, for example, every unsuccessful applicant could seek human review.
Given the worldwide interest in the risk-based model of the EU AI Act, we may see additional states follow Colorado's example by seeking to enact legislation with a similar framing of risk, Developer-Deployer separation of duties, and pre-use impact assessments. How the SB205 model fares in practice will warrant close scrutiny both inside and outside Colorado.
Footnotes
1 This is in stark contrast to numerous other bills and laws whose broad definitions sweep in everything from complex machine-learning models to simple, non-AI computerized processes, and in some cases, potentially even human-driven paper-and-pencil methodologies.
2 SB205 states that any technology that communicates "in natural language for the purpose of providing users with information, making referrals or recommendations, and answering questions" is excluded as long as it is subject to an acceptable use policy that prohibits the generation of discriminatory or harmful content. (See § 6-1-1601(9)(b)(ii)(R).)
3 See, e.g., Regulation (EU) 2016/679 of the European Parliament and of the Council of April 27, 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), 2016 O.J. (L 119), Art. 22.
4 SB205 does not, however, require the disclosure of trade secrets. See §§ 6-1-1602(5), 6-1-1603(8).
5 Deployers with 50 or fewer employees qualify for an exemption from most of SB205's notice and disclosure requirements if they use an off-the-shelf AI tool in its intended manner, so long as the Developer of that tool provides the Deployer with a sufficient impact assessment, and the tool does not train on the Deployer's data. (See § 6-1-1603(6).)