21 November 2025

AI In Hiring: Navigating The Emerging Legal Landscape

Outside GC
Contributor

OGC is a unique law firm that offers the relationship and experience of a traditional law firm with the cost savings and speed of an ALSP. By combining top-notch legal talent and significant business acumen, we deliver the value and efficiency of an in-house lawyer, without adding to our clients' headcount or sacrificing quality.

Artificial intelligence has quickly become an integral part of the hiring process for many employers. From résumé-screening software to predictive analytics that claim to identify the "best fit" candidate, AI offers a tempting promise: faster, cheaper, and more efficient hiring. But with that promise comes growing concern around the potential legal risks inherent in the use of these tools.

Current Legislative Trends

As more states and localities enact requirements relating to AI in hiring, the validity of these concerns is becoming increasingly clear. Legislators and regulators are recognizing the potential for bias and lack of transparency in automated decision-making systems. For example:

  • New York City: Since July 2023, Local Law 144 has required annual bias audits of automated hiring tools and obligates employers to notify applicants when these systems are used.
  • California: As of October 1, 2025, the California Civil Rights Department's new regulations explicitly make it unlawful under the state's anti-discrimination law for an employer to use an automated decision system (ADS) that discriminates against an applicant or employee on the basis of a protected characteristic, such as age, gender, race, or disability.
  • Illinois: Beginning in January 2026, amendments to the Illinois Human Rights Act will make it an unlawful employment practice to use predictive data analytics (a/k/a AI) in employment decisions without proper notice to applicants and employees, or to use a system that relies on protected class information or ZIP code data (a potential proxy for race) to make employment-related decisions.
  • Colorado: Implementation of a comprehensive AI law passed in 2024 has again been delayed and will take effect on June 30, 2026, at the earliest, with some chance that the state legislature may yet modify some provisions. As presently written, the Colorado law imposes on employers a duty of reasonable care to protect applicants and employees from any "known or reasonably foreseeable" risk of algorithmic discrimination.
  • Potential Federal Action on AI: A federal moratorium on state-level efforts to regulate AI was withdrawn from the One Big Beautiful Bill Act in July 2025. The Trump administration then released an AI Action Plan that advocates in favor of removing regulatory obstacles to AI development, and in August 2025, issued executive orders, including one focusing on AI procurement and ensuring that AI systems are truthful, neutral, and free from ideological bias. It remains to be seen whether there will be any further legislative attempt to limit state-level regulation or to enact AI laws at the federal level.

What This Means for Employers

The risks to employers are real. Just because an AI tool is popular doesn't mean that it is "safe" or the right fit for your workplace. With a rapidly evolving regulatory landscape — and growing scrutiny from both federal and state agencies — it's important for employers to understand how these tools work, what data they use, and where potential bias or compliance gaps may exist.

One of the biggest challenges stems from how AI learns by analyzing historical information. If that data contains bias, an AI-influenced hiring decision could exclude or disadvantage applicants and employees in one or more protected groups, opening the door to discrimination claims. Even without any intent to discriminate, and even when using a third-party vendor, the employer could still face liability. Beyond the legal and financial implications, reputational harm can be equally damaging. Headlines about "biased AI" hiring tools can erode trust with employees, candidates, and customers.

A Proactive Approach

Just like any other high-risk area of compliance, employers can better protect themselves with careful oversight, thorough documentation, and strong safeguards, including:

Oversight
Conducting independent bias audits is a critical component of oversight. New York City requires such audits, and, under California's new CRD regulations, evidence of anti-bias testing or other proactive mitigation measures is considered relevant to a party's claims and defenses. Even where not legally required, regular testing of AI hiring tools can demonstrate good faith and, if disparities are found, give the employer the opportunity to correct them before they escalate.
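To make the audit concept concrete: New York City's Local Law 144 frames a bias audit around selection rates and impact ratios (each category's selection rate divided by the most-selected category's rate), and the EEOC's longstanding "four-fifths" rule of thumb treats a ratio below 0.8 as a possible signal of adverse impact. The minimal Python sketch below illustrates that arithmetic on hypothetical data; the group labels, records, and threshold are assumptions for illustration only, and an actual Local Law 144 audit must be performed by an independent auditor on real applicant data.

```python
from collections import Counter

# Hypothetical applicant records: (demographic category, selected?).
# The labels, outcomes, and 0.8 threshold below are illustrative
# assumptions, not requirements of Local Law 144 or any other statute.
applicants = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def impact_ratios(records):
    """Selection rate for each category, divided by the highest category's rate."""
    totals, selections = Counter(), Counter()
    for category, selected in records:
        totals[category] += 1
        selections[category] += selected      # True counts as 1
    rates = {c: selections[c] / totals[c] for c in totals}
    top = max(rates.values())
    return {c: rate / top for c, rate in rates.items()}

# Flag any category falling below the EEOC's four-fifths rule of thumb.
for category, ratio in sorted(impact_ratios(applicants).items()):
    note = "  <- review for possible adverse impact" if ratio < 0.8 else ""
    print(f"{category}: impact ratio {ratio:.2f}{note}")
```

In practice, counsel and the independent auditor would determine the appropriate categories, intersectional groupings, sample-size considerations, and any remediation steps, none of which this toy calculation addresses.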

Human oversight is equally important. AI tools should support rather than replace human judgment. Training recruiters and hiring managers on the limitations of these systems and the importance of critically evaluating AI recommendations helps to prevent "rubber-stamping" automated decisions.

Documentation
Updating internal policies and procedures is another key compliance step. Policies should address:

  • When and how AI is used;
  • What safeguards are in place;
  • How applicants and employees are notified of an employer's AI policy and decisions affecting them; and
  • Recordkeeping and retention obligations.

HR and management can then be trained on any updates, including notice, reasonable accommodation, and documentation standards.

Vendor Contracts
Many employers rely on third-party AI vendors, but doing so will likely not insulate them from risk. Consider asking vendors for detailed information about their systems, including how the models were trained, what data sets were used, and how the tools are monitored for bias. Contracts with AI vendors that include clear representations about compliance, audit rights, and indemnity provisions in the event of a legal claim can also be helpful. Employers who negotiate protective language into their agreements may be better positioned if disputes arise.

Notices
Providing notice to applicants and employees about the use of AI tools, and allowing opportunities to review, correct, or appeal adverse decisions, may further reduce risk. Transparency not only builds trust but also demonstrates a commitment to fairness.

Looking Ahead

As AI-related regulations continue to develop across jurisdictions, employers that adopt a "compliance by design" mindset, building fairness, oversight, and transparency into AI systems from the outset, will likely be best positioned to benefit from AI's efficiencies while minimizing compliance risks. Those that take a thoughtful, proactive approach can harness innovation responsibly and may strengthen trust across their workforce.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
