ARTICLE
19 December 2025

AI Bias In Hiring: Algorithmic Recruiting And Your Rights

Sanford Heisler Sharp McKnight


Sanford Heisler Sharp McKnight is committed to litigating and resolving public interest, social justice, and civil rights matters that add significant value to individuals and communities across America. We excel at representing individuals, groups of individuals, and public entities in employment discrimination, whistleblower, ERISA, sexual violence, Title IX, victims’ rights, and public sector litigation.
David Tracey, Sanford Heisler Sharp McKnight

AI hiring tools are changing the hiring process, but they can also replicate bias and discrimination. Learn how algorithmic recruiting affects job applicants, what the law says, and how our firm is leading the fight for equity.

AI Bias in Hiring: What is Happening Now and What Employees Need to Know

At Sanford Heisler Sharp McKnight, we've long represented individuals facing workplace discrimination. Today, one of the fastest-growing battlegrounds is the use of artificial intelligence (AI) in hiring. AI is now a standard part of how companies recruit, evaluate, and hire employees. From resume scanning to automated interviews, AI algorithms help decide who moves forward – and who doesn't. These systems are meant to increase efficiency and streamline the hiring process, but they often make decisions without transparency or human review. As a result, AI carries hidden risks, particularly for job seekers from historically marginalized groups.

  1. How are employers using AI in hiring and how common is it?

AI hiring tools are used in nearly every stage of the hiring process. Employers can use AI to:

  • Post and promote job listings online
  • Create application materials
  • Prescreen applicants' resumes
  • Schedule interviews
  • Conduct or analyze video interviews

This means that a highly qualified candidate who is missing a few keywords on their resume could be rejected automatically before a human has so much as glanced at their application.

A 2024 Gallup survey found that 93% of Fortune 500 Chief Human Resource Officers (CHROs) are integrating AI into business practices. Yet only about one-third of employees knew that their employer used AI tools in hiring or management.

  2. Why should job seekers be concerned and what kinds of bias can occur?

Despite claims of neutrality, AI tools can discriminate by reproducing and amplifying human bias. If the data used to train the algorithm reflects biased hiring patterns, such as favoring certain genders, races, ages, or schools, AI tools will replicate those biases. This can result in qualified candidates being filtered out simply because they don't match a profile the algorithm has learned to prefer. Common examples of AI bias include:

  • Favoring men over women for technical or leadership roles
  • Screening out applicants based on race, age, or disability
  • Penalizing gaps in employment history that may be related to caregiving or illness

Recent research confirms these risks:

  • In October 2025, Stanford researchers found that AI resume-screening tools gave older male candidates higher ratings than both female candidates and younger candidates, despite all candidates' resumes being generated from the same data.
  • Research published through VoxDev in May 2025 found that AI hiring tools systematically favored female applicants over Black male applicants with identical qualifications.

These patterns mirror the discrimination we see in our litigation. For example, in the case of Huskey v. State Farm Fire & Casualty (filed December 14, 2022), our firm is co-lead counsel in a proposed class-action suit against State Farm alleging that its use of algorithmic screening in claims processing disproportionately harmed Black homeowners. This case illustrates that regardless of industry, when algorithms replace human judgment, the same civil-rights rules apply.

  3. Is AI bias illegal and who is liable?

Yes. Employers are responsible for ensuring that their hiring practices comply with U.S. employment laws, including:

  • Title VII of the Civil Rights Act (prohibiting discrimination based on race, color, religion, sex, or national origin)
  • The Americans with Disabilities Act (ADA)
  • The Age Discrimination in Employment Act (ADEA)

The U.S. Equal Employment Opportunity Commission (EEOC) and the U.S. Department of Justice (DOJ) have warned that employers can't outsource responsibility to AI tools. If an AI tool produces discriminatory results, the employer may be held accountable.

This principle is being tested in the class action suit Mobley v. Workday, where the plaintiffs allege that Workday's AI screening tools discriminate based on applicants' race, age, and disability. This year, a federal judge expanded the case to include applicants screened by HiredScore AI, another Workday tool, and ruled that the AI tools can be considered an "agent" of the employer.

  4. How is AI in hiring regulated?

Recent regulation and legislation addressing AI hiring practices include:

  • In October 2025, California finalized regulations that provide clarity on how current anti-discrimination laws apply to AI tools that are used in hiring practices.
  • In New York City, a local law is already in effect that requires annual bias audits for automated employment decision tools and public reporting of the results of the audit.
  • The Colorado AI Act, effective June 2026, will require developers and users of AI hiring tools to use reasonable care to prevent algorithmic discrimination.

  5. What should you do if you suspect AI bias in your job application?

If you suspect you were rejected not because of your qualifications but because an algorithm unlawfully disadvantaged you, take these steps:

  • Document everything. Save job postings, emails, assessment reports, and timelines.
  • Ask directly whether your application was processed or scored using AI or other automated tools.
  • Speak with our team at Sanford Heisler Sharp McKnight. We have experience litigating algorithmic bias and can evaluate whether you have a claim.

AI in hiring may sound efficient, but efficiency with unlawful bias is still discrimination. At Sanford Heisler Sharp McKnight, we work to ensure that technology does not erode your rights. If a discriminatory algorithm, rather than your qualifications, cost you a job, you deserve accountability. Reach out today to discuss your situation.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

