Artificial intelligence (AI) plays a growing role in our daily lives, from customer service chatbots and digital assistants to ubiquitous smart home devices. Its reach is also expanding quickly into human resources. Many companies use AI to track employee performance: for example, AI can evaluate employees' sales performance through electronic data or assess project outcomes. It is more common than ever for employers to use AI tools to screen applicants or promotion candidates, including by reviewing resumes and applications. The EEOC reports that up to 99% of Fortune 500 companies use some form of AI to screen or rank candidates for hire. While these tools can save time and streamline access to information, some uses are drawing scrutiny from federal agencies (such as the Department of Labor) and state legislatures.
Colorado entered the chat on May 17, 2024, when Governor Jared Polis signed the Colorado Artificial Intelligence Act (CAIA). Set to go into effect on February 1, 2026, the law broadly addresses the use of AI in consumer settings, including employment. The CAIA includes provisions that require both developers and deployers of AI tools to use reasonable care to avoid discrimination through the use of "high-risk" AI systems. To understand these requirements, a few statutory definitions are helpful:
- "Deployers" include those doing business in Colorado utilizing a "high-risk" AI system
- A "high risk" system is any AI system that "makes, or is a substantial factor in making, a consequential decision," including decisions with respect to employment or employment opportunities
- A "substantial factor" refers to the use of an AI system to generate any content, decision, prediction, or recommendation concerning a consumer that is used as a basis to make a "consequential decision" regarding that consumer
- "Consumer" means an individual who is a resident of Colorado
The CAIA applies to all Colorado employers, exempting only those with fewer than 50 employees that do not use their own information or data to teach or improve the AI system in use, or those who utilize systems that meet certain criteria. One goal of the law is to ensure that AI algorithms do not return discriminatory results based on actual or perceived age, color, disability, ethnicity, or other protected characteristics. Users of AI systems in recruiting and hiring are charged with using reasonable care to protect applicants and candidates from any known or reasonably foreseeable risks of discrimination. While this is in line with current laws requiring employers to use nondiscriminatory hiring practices, the CAIA goes further, requiring employers who use AI in this process to take certain affirmative steps.
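Before turning to those steps, it is worth noting that the statute does not prescribe any particular test for spotting discriminatory results. One familiar heuristic a deployer might fold into its monitoring is the EEOC's "four-fifths" rule of thumb, which flags any group whose selection rate falls below 80% of the highest group's rate. The Python sketch below is purely illustrative: the CAIA does not mandate this (or any) specific test, the group labels and numbers are hypothetical, and real adverse-impact analysis warrants statistical and legal review.

```python
# Illustrative only: the CAIA does not mandate this (or any) specific test.
# This applies the EEOC's "four-fifths" (80%) rule of thumb to the outcomes
# of a hypothetical AI screening tool, broken out by group.

def selection_rates(outcomes):
    """outcomes maps group -> (number selected, total applicants)."""
    return {group: selected / total
            for group, (selected, total) in outcomes.items()}

def four_fifths_flags(outcomes, threshold=0.8):
    """Return groups whose selection rate is under `threshold` of the top rate."""
    rates = selection_rates(outcomes)
    top_rate = max(rates.values())
    return {group: rate / top_rate
            for group, rate in rates.items()
            if rate / top_rate < threshold}

# Hypothetical screening outcomes by group.
screening = {
    "group_a": (48, 100),  # 48% selected
    "group_b": (30, 100),  # 30% selected -> ratio 0.625, below 0.8
}
print(four_fifths_flags(screening))  # {'group_b': 0.625}
```

A flag from a check like this would not itself establish discrimination, but it is the kind of "known or reasonably foreseeable risk" signal a risk-management program could surface for human review.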
The CAIA itself imposes a number of concrete action items on deployers who use AI and are not otherwise exempt. Among the more prominent obligations, a deployer must:
- Implement a risk-management policy and program to govern the use of the AI system, and regularly review and update that policy
- Complete an impact assessment for the AI system
- Notify the consumer that AI is being used to make a consequential decision before the decision is made, and disclose the purpose of the system, contact information for the deployer, a description of the program, and how to access additional information
- Notify the attorney general within 90 days of discovering that the AI system has caused a discriminatory result
In the event of a decision adverse to the consumer, the deployer must, among other things:
- Disclose to the consumer the principal reason(s) for the decision, including:
- The degree to which and the manner in which the AI system contributed to the decision
- The type of data that was processed by the AI system relating to the decision
- The source of the data considered
- Provide an opportunity to correct any incorrect data used by the AI system
- Offer an opportunity to appeal the decision using human review
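For teams building compliance tooling around these disclosure duties, the statutory elements map naturally onto a structured record. The sketch below is purely illustrative: the CAIA dictates what must be disclosed, not the format, and every class name, field, and value here is hypothetical.

```python
from dataclasses import dataclass

# Hypothetical record mirroring the CAIA's adverse-decision disclosure
# elements; the statute mandates the content, not this structure.
@dataclass
class AdverseDecisionNotice:
    principal_reasons: list[str]     # principal reason(s) for the decision
    ai_contribution: str             # degree and manner the AI system contributed
    data_types_processed: list[str]  # types of data the system processed
    data_sources: list[str]          # where that data came from
    correction_instructions: str     # how the consumer can correct incorrect data
    appeal_instructions: str         # how to appeal to human review

# Illustrative values only.
notice = AdverseDecisionNotice(
    principal_reasons=["Screening score fell below the interview cutoff"],
    ai_contribution="Resume-screening model ranking was a substantial factor",
    data_types_processed=["resume text", "work history"],
    data_sources=["applicant-submitted resume"],
    correction_instructions="Contact HR to dispute the data on file",
    appeal_instructions="Request human review within the posted window",
)
print(notice.principal_reasons)
```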
Importantly, where a deployer complies with its obligations under the CAIA, it will be entitled to a rebuttable presumption that it did, in fact, use reasonable care to protect consumers from foreseeable risks of discrimination through the AI system.
The CAIA also provides for public enforcement mechanisms; however, it expressly states that individuals do not have a private right to sue for violations. Colorado's attorney general is charged with enforcing the statute. Deployers who act promptly to cure violations and otherwise maintain their systems in compliance with generally accepted AI risk-management practices may have an affirmative defense to an enforcement action. When challenged, the deployer bears the burden of proving that it qualifies for these defenses.
While the law is slated to go into effect in early 2026, it remains subject to further rulemaking. Governor Polis, notably a dot-com entrepreneur who founded an internet access provider and several well-known online retailers before entering politics, also advised the legislature that the long lead time before the law takes effect is intended to give the government room to further consider and refine the statute, which is complicated both in its technical compliance requirements and in the potentially complex landscape of national regulation it must navigate.
Employers who use AI tools in recruiting or employing Colorado residents should use the next 18 months to fine-tune their systems, approaches, and processes for avoiding discrimination through technology. Time flies, especially when compliance deadlines loom. Taking a proactive view will help avoid potential AI pitfalls in recruiting.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.