Duane Morris Takeaways: On October 1, 2025, California's "Employment Regulations Regarding Automated-Decision Systems" will take effect. These new AI employment regulations can be accessed here. The regulations add an "agency" theory under the California Fair Employment and Housing Act (FEHA) and formalize that theory's applicability to AI tool developers and to companies employing AI tools that facilitate human decision-making in the recruitment, hiring, and promotion of job applicants and employees. Because the FEHA provides a private right of action, these new AI employment regulations may augur an uptick in AI employment tool class actions brought under the FEHA. This blog post identifies key provisions of the new regulations and steps employers and AI tool developers can take to mitigate FEHA class action risk.
Background
In the widely watched class action captioned Mobley v. Workday, No. 23-CV-770 (N.D. Cal.), the plaintiff alleges that an AI tool developer's algorithm-based screening tools discriminated against job applicants on the basis of race, age, and disability in violation of Title VII of the Civil Rights Act of 1964 ("Title VII"), the Age Discrimination in Employment Act of 1967 ("ADEA"), the Americans with Disabilities Act Amendments Act of 2008 ("ADA"), and California's FEHA. Last year, the U.S. District Court for the Northern District of California denied dismissal of the Title VII, ADEA, and ADA disparate impact claims on the theory that the developer of the algorithm was plausibly alleged to be the employer's agent, but it dismissed the FEHA claim, which had been brought only under the then-available theory of intentional aiding and abetting (as we previously blogged about here).
In recent years, discrimination stemming from AI employment tools has been addressed by other state and local statutes, including Colorado's AI Act (CAIA), which sets forth developers' and deployers' "duty to avoid algorithmic discrimination"; New York City's law regarding the use of automated employment decision tools; the Illinois AI Video Interview Act; and the 2024 amendment to the Illinois Human Rights Act (IHRA) regulating the use of AI. Of these laws, only the IHRA amendment provides for a private right of action, and it does not take effect until January 1, 2026.
Key Provisions Of California's AI Employment Regulations
California's AI employment regulations amend and clarify how the FEHA applies to AI employment tools, thus constituting a new development in case theories available to class action plaintiffs regarding alleged harms stemming from AI systems and algorithmic discrimination.
Employers and AI employment tool developers should take note of key provisions codified by California's new AI employment regulations, as follows:
- Agency theory. The regulations add an "agency" theory under the FEHA similar to the one that allowed the plaintiff in Mobley v. Workday to proceed past a motion to dismiss on his federal claims: an AI tool developer may face litigation risk for developing algorithms that result in a disparate impact when the tool is used by an employer. While Mobley v. Workday continues to proceed in the trial court, no appellate court has yet had occasion to address the "agency" theories being litigated in that case under the federal antidiscrimination statutes. With the California AI employment regulations taking effect October 1, 2025, however, that theory is now expressly codified under the FEHA. 2 Cal. Code Regs. § 11008(a).
- Proxies for discrimination. The regulations clarify that it is unlawful to use an employment tool algorithm that discriminates by using a "proxy," which the regulations define as a "characteristic or category closely correlated with a basis protected by the Act." Id. §§ 11008(a), 11009(f). While the regulations do not explicitly identify any proxies, examples identified in the literature by the EEOC's former Chief Analyst include zip code (a proxy also codified in the IHRA), first name, alma mater, credit history, and participation in hobbies or extracurricular activities.
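To make the proxy concept concrete, the sketch below screens a candidate feature (here, zip code) for how strongly it separates a protected class before the feature is fed to a screening model. All data, names, and the notion of "spread" are hypothetical illustrations, not anything prescribed by the regulations; a real proxy analysis would involve rigorous statistical testing and legal review.

```python
# Hypothetical proxy screen: measure how strongly a candidate feature
# (zip code) predicts protected-class membership. A large spread in
# class share across feature values suggests the feature may act as a
# "proxy" under the regulations and warrants closer review.
from collections import defaultdict

def class_share_by_value(records, feature, protected):
    """For each value of `feature`, the share of records in the protected class."""
    counts = defaultdict(lambda: [0, 0])  # value -> [protected_count, total]
    for r in records:
        counts[r[feature]][1] += 1
        if r[protected]:
            counts[r[feature]][0] += 1
    return {v: p / t for v, (p, t) in counts.items()}

# Hypothetical applicant data: zip code and protected-class membership.
applicants = [
    {"zip": "90001", "in_class": True},
    {"zip": "90001", "in_class": True},
    {"zip": "90001", "in_class": False},
    {"zip": "94105", "in_class": False},
    {"zip": "94105", "in_class": False},
    {"zip": "94105", "in_class": True},
]

shares = class_share_by_value(applicants, "zip", "in_class")
# A large gap between zip codes means zip code strongly separates the
# protected class and may function as a proxy for it.
spread = max(shares.values()) - min(shares.values())
print(f"shares={shares}, spread={spread:.2f}")
```

In this toy dataset, one zip code is two-thirds protected-class members and the other is one-third, so zip code carries substantial information about class membership even though the protected characteristic itself never appears as a model input.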
- Anti-bias testing. The regulations provide that "anti-bias testing or similar proactive efforts to avoid unlawful discrimination, including the quality, efficacy, recency, and scope of such efforts, the results of such testing or other effort, and the response to the results" are relevant to a claim of employment discrimination or to an available defense. Id. § 11020(b). Thus, for example, adoption of NIST's AI Risk Management Framework, itself codified as a defense under the CAIA, could be a factor supporting a defense under the FEHA. Many other factors are pertinent to anti-bias testing, including auditing, tuning, and the use of various interpretability methods and fairness metrics, discussed in our prior blog entry and article on this subject (here).
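As one illustration of what a basic anti-bias test can look like, the sketch below computes the adverse impact ratio familiar from the EEOC's "four-fifths rule." The group names and selection counts are hypothetical, and the regulations do not mandate this particular metric; it is simply a common starting point in algorithmic audits.

```python
# Illustrative anti-bias check using the "four-fifths rule": compare
# each group's selection rate to the highest group's rate and flag
# ratios below 0.8 for further review. Data here is hypothetical.

def selection_rate(selected, total):
    """Fraction of applicants in a group who were selected."""
    return selected / total

def adverse_impact_ratio(rate_group, rate_reference):
    """Ratio of a group's selection rate to the reference (highest) rate."""
    return rate_group / rate_reference

# Hypothetical screening outcomes from an AI resume-screening tool.
outcomes = {
    "group_a": {"selected": 48, "total": 100},
    "group_b": {"selected": 30, "total": 100},
}

rates = {g: selection_rate(o["selected"], o["total"]) for g, o in outcomes.items()}
reference = max(rates.values())

for group, rate in rates.items():
    ratio = adverse_impact_ratio(rate, reference)
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2f}, ratio={ratio:.3f} -> {flag}")
```

A ratio below 0.8 does not itself establish unlawful discrimination, but documenting such tests, their results, and the employer's response maps directly onto the factors the regulations make relevant under § 11020(b).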
- Data retention. The regulations provide that employers, employment agencies, labor organizations, and apprenticeship training programs must maintain employment records, including automated-decision data, for a minimum of four years. Id. § 11013(c).
Implications For Employers
California's AI employment regulations increase the risk that employers and AI tool developers will face class action lawsuits, similar to Mobley v. Workday, alleging discrimination under the FEHA. However, developers and employers have several tools at their disposal to mitigate AI employment tool class action risk. One is to ensure that AI employment tools comply with the FEHA provisions discussed above and with other antidiscrimination statutes. Others include adding or updating arbitration agreements to mitigate the risk of mass arbitration; collaborating with IT, cybersecurity, and risk/compliance departments and outside advisors to identify and manage AI risks; and updating notices to third parties and vendor agreements.
Disclaimer: This Alert has been prepared and published for informational purposes only and is not offered, nor should be construed, as legal advice. For more information, please see the firm's full disclaimer.