Artificial intelligence continues to revolutionize HR and talent acquisition, promising efficiency and scalability in hiring. However, as a recent lawsuit against Workday shows, automation does not absolve employers or their vendors of compliance with anti-discrimination laws.
In Mobley v. Workday, Inc., Case No. 23-CV-770, a California federal judge allowed a collective action age discrimination lawsuit to proceed against Workday, Inc. (Workday), a leading provider of AI-powered applicant tracking systems (ATS). The suit alleges that Workday's AI-based tools disproportionately screened out older job applicants in violation of the Age Discrimination in Employment Act (ADEA) and California's Fair Employment and Housing Act (FEHA). The court's ruling marks a critical moment in the legal scrutiny of algorithmic hiring tools.
Background
Derek Mobley, a job applicant over 40, asserts that he applied to more than 100 jobs with companies that use Workday's AI-based hiring tools and was rejected every time. Those tools include personality and cognitive assessments and algorithmic scoring of a candidate's qualifications, and they can automatically reject or advance candidates.
Workday sought dismissal, arguing that it was not the employer making the actual employment decisions. Mobley countered that Workday is a "joint employer" or is otherwise liable because it built and operated the screening tools, which allegedly had an adverse impact on older applicants by weighing years since graduation, gaps in employment, and other factors that allegedly correlate with age.
The judge's decision to allow the case to proceed as a collective action under the ADEA's disparate impact theory significantly increases its reach and implications, opening the door for other similarly situated applicants to join the suit. Disparate impact refers to a facially neutral employment practice or policy, i.e., one that does not explicitly mention age, that has a disproportionately adverse effect on individuals over the age of 40 and cannot be justified by business necessity.
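To make "disproportionately adverse effect" concrete: auditors often start with the EEOC's four-fifths guideline, under which a selection rate for a protected group below 80% of the highest group's rate is treated as evidence of adverse impact. The sketch below uses invented numbers purely for illustration; it is not data from the case, and the four-fifths ratio is a screening heuristic, not the legal standard itself.

```python
# Hypothetical illustration of the EEOC "four-fifths" (80%) guideline,
# a common first-pass screen for adverse impact. All numbers are invented.

def selection_rate(advanced: int, applied: int) -> float:
    """Fraction of applicants who passed a screening step."""
    return advanced / applied

under_40 = selection_rate(advanced=300, applied=1000)  # 30% pass rate
over_40 = selection_rate(advanced=120, applied=1000)   # 12% pass rate

impact_ratio = over_40 / under_40                      # 0.40

print(f"Selection rate, under 40: {under_40:.0%}")
print(f"Selection rate, 40+:      {over_40:.0%}")
print(f"Impact ratio: {impact_ratio:.2f}")
if impact_ratio < 0.80:
    print("Below the four-fifths threshold: potential adverse impact; investigate.")
```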
Key Legal Takeaways
Vendor liability under employment law is gaining traction. The Workday case may expand the traditional scope of liability in employment discrimination law. Plaintiffs are pushing courts to hold vendors accountable when their AI tools are essential to the employment decision-making process, even if the vendors do not make final hiring decisions.
Algorithmic bias is real and actionable. Plaintiffs allege that Workday's tools used proxies that served as stand-ins for age, such as experience level, education dates, and employment gaps. The court found these allegations sufficient to survive a motion to dismiss, signaling that facially neutral criteria do not insulate companies from disparate impact claims.
Class and collective actions are viable against tech providers. This case mirrors a growing trend in which courts are willing to consider collective actions where AI tools operate at scale and affect large numbers of job applicants, drastically raising potential exposure for employers and tech vendors alike. In fact, since the Workday suit, other tech vendors, including Intuit and HireVue, have been sued over allegedly biased AI hiring technology that the ACLU claims excludes and disadvantages disabled and minority applicants.
Employers cannot delegate compliance to vendors. Even if Workday is ultimately held liable, that does not absolve employers of their duty to vet and monitor the tools they use. Employers remain responsible for ensuring their hiring practices are non-discriminatory, even when using third-party technology.
Screening algorithms must be audited for bias. Legal experts and regulators increasingly expect that AI hiring tools undergo pre- and post-deployment testing for bias. Employers should document these efforts and consider retaining outside experts to analyze disparate impact.
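As a rough sketch of what one piece of such testing can look like (a simplified illustration with invented numbers, not a substitute for an expert-led audit), a two-proportion z-test asks whether an observed gap in pass rates between age groups is statistically significant:

```python
# Simplified sketch of one statistical check sometimes used in bias audits:
# a two-proportion z-test on pass rates for two groups. Numbers are invented.
from math import sqrt
from statistics import NormalDist

def two_proportion_z(pass_a: int, n_a: int, pass_b: int, n_b: int) -> float:
    """Return the two-sided p-value for a difference in pass rates."""
    p_a, p_b = pass_a / n_a, pass_b / n_b
    pooled = (pass_a + pass_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Invented audit sample: pass rates for under-40 vs. 40+ applicants.
p_value = two_proportion_z(pass_a=300, n_a=1000, pass_b=240, n_b=1000)
print(f"p-value: {p_value:.4f}")  # small p-value: disparity unlikely due to chance
```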
This is just the beginning. The EEOC and state regulators have signaled increased scrutiny of AI-based hiring tools. Illinois and New York City have already enacted laws requiring disclosures, consent, or bias audits for AI video analysis and automated decision-making tools. A wave of regulatory enforcement and litigation is likely.
Impact on Disparate Impact. In April 2025, President Trump signed an Executive Order directing federal agencies to eliminate enforcement based on disparate impact, which will likely reduce federal investigations into algorithmic discrimination. The order does not affect private litigation, and the pullback in federal involvement may spur state agencies to increase enforcement. The likely result is fewer federal enforcement actions but an uptick in state agency enforcement and private suits targeting AI tools under a disparate impact theory.
Best Practices for Employers and HR Teams
To reduce the risk of becoming the next headline, employers should:
- Conduct AI bias audits before deploying any automated decision-making tools, especially tools that rely on opaque metrics or vague criteria.
- Ensure vendor contracts include representations and warranties about EEO compliance and indemnification provisions for AI tool bias claims.
- Monitor rejection rates by age, race, and gender to identify adverse impacts (a minimal monitoring sketch follows this list).
- Document the criteria used and reasons for each hiring decision.
- Train HR teams to understand how AI decisions are made and when to override them.
- Document human review processes in any hiring stage involving automation.
- If possible, establish a company-wide AI governance program to address the uses of AI across all aspects of the business, including hiring.
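As referenced above, ongoing monitoring can start small. The sketch below assumes a hypothetical applicant-flow log exported from the ATS; the column names and records are illustrative assumptions, and a flagged ratio should trigger legal and statistical review rather than serve as a conclusion on its own.

```python
# Minimal monitoring sketch over a hypothetical applicant-flow log.
# Column names and data are illustrative assumptions, not a real schema.
import pandas as pd

df = pd.DataFrame({
    "age_band": ["under_40", "40_plus", "under_40", "40_plus"],
    "race":     ["A", "B", "B", "A"],
    "gender":   ["F", "M", "M", "F"],
    "advanced": [1, 0, 1, 1],  # 1 = passed the automated screening stage
})

for attr in ("age_band", "race", "gender"):
    rates = df.groupby(attr)["advanced"].mean()
    ratio = rates.min() / rates.max()  # impact ratio: lowest vs. highest rate
    status = "REVIEW" if ratio < 0.80 else "ok"
    print(f"{attr}: impact ratio {ratio:.2f} [{status}]")
```

In practice, such checks would run on real data at every automated stage, with thresholds, findings, and remediation documented, consistent with the audit and documentation practices above.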
Final Thoughts
The Workday lawsuit is a wake-up call for employers relying on algorithmic tools to streamline hiring. While AI holds great promise, it must be implemented with intentionality, oversight, and legal compliance. Employers cannot afford to take a hands-off approach. Now is the time to scrutinize your tech stack, engage legal counsel, and ensure your use of AI is legal and equitable.