Seyfarth Synopsis: Last week, the US Equal Employment Opportunity Commission ("EEOC") filed a proposed consent decree to settle a lawsuit that many are calling the EEOC's "first-ever" artificial intelligence hiring-discrimination lawsuit. The settlement underscores the EEOC's ongoing emphasis on AI and algorithmic bias, and reminds employers that the results of any technology-assisted screening process should comply with existing civil-rights laws. In this alert, Seyfarth discusses key takeaways from this settlement for all employers, regardless of whether their hiring technology might be characterized as an "artificial intelligence" tool.
The EEOC's lawsuit, against iTutor Group and its related companies ("iTutor"), involved an employer that hired thousands of tutors in the United States each year to provide online tutoring from their homes or other remote locations. Under the proposed consent decree filed last week, the employer will pay $365,000 to approximately 200 people who applied for jobs in March and April 2020 and who were allegedly rejected because of their age.
While multiple media reports have characterized the EEOC's iTutor lawsuit as a case involving artificial intelligence, the EEOC's complaint alleged only that the online job application system requested dates of birth and that the application software automatically rejected female applicants age 55 or older and male applicants age 60 or older. Although neither the complaint nor the proposed consent decree expressly references artificial intelligence or machine learning, the EEOC's press release linked the case to its recent Artificial Intelligence and Algorithmic Fairness Initiative as an example of the types of technologies the EEOC intends to scrutinize.
To be clear, automatically rejecting older job applicants whose birthdates are already known does not require any sort of artificial intelligence or machine learning. However, it is fair to say that the EEOC's complaint, and its positioning of the allegations, squarely falls within the agency's broader scrutiny of all sorts of hiring technology, not just "artificial intelligence."
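To illustrate how little "intelligence" such a screen requires, the sketch below expresses the rule alleged in the complaint as ordinary conditional logic. Only the age thresholds come from the complaint; the data shapes and function names are our own hypothetical constructions, not iTutor's actual software.

```python
from datetime import date

# Hypothetical illustration only: the age thresholds come from the EEOC's
# complaint, but the data shapes and function names are our own. The point
# is that this screen is a plain conditional, not machine learning -- and
# it is unlawful age discrimination either way.
REJECTION_AGE = {"female": 55, "male": 60}

def age_on(birthdate: date, as_of: date) -> int:
    """Whole years of age as of a given date."""
    years = as_of.year - birthdate.year
    if (as_of.month, as_of.day) < (birthdate.month, birthdate.day):
        years -= 1
    return years

def auto_reject(birthdate: date, gender: str, as_of: date) -> bool:
    """True if the alleged rule would reject this applicant."""
    threshold = REJECTION_AGE.get(gender)
    return threshold is not None and age_on(birthdate, as_of) >= threshold

# A 56-year-old female applicant in April 2020 would be rejected outright.
print(auto_reject(date(1964, 1, 15), "female", date(2020, 4, 1)))  # True
```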
The EEOC's iTutor settlement is an important reminder that, given the EEOC's ongoing efforts in this rapidly developing area and the attendant media coverage, employers must continue to scrutinize their use of any hiring technology, including tools better described in terms of "algorithmic fairness" than artificial intelligence.
Key Takeaway
The iTutor settlement, and the EEOC's ongoing emphasis on AI and algorithmic bias, serve as a strong reminder to employers that the results of any technology-assisted screening process should comply with existing civil-rights laws. This is true for technology both complicated and simple: it applies whether an employer is using a cutting-edge artificial intelligence product or its recruiters are simply setting filters on a spreadsheet. A robust compliance and risk-management program should periodically evaluate how technology, both sophisticated and simple, is used in the hiring process to ensure compliance and manage other risks.
Recent Settlements and Enforcement Actions Reach More Than Just Artificial Intelligence
The EEOC's complaint against iTutor focused on the employer's alleged use of straightforward technology in hiring and job applications. While few employers would characterize the basic technology iTutor used as "artificial intelligence," the alleged conduct unquestionably falls into a broader category: violations of existing civil-rights laws enabled by technology. The EEOC's scrutiny of applicant tracking systems follows similar settlements involving employers that allegedly used these systems in ways that violated existing civil-rights laws.
In 2022 and 2023, the US Department of Justice Civil Rights Division's Immigrant and Employee Rights Section ("IER") reached settlements with 30 employers, assessing combined civil penalties of over $1.6 million, over the employers' use of a college recruiting platform operated by the Georgia Institute of Technology ("Georgia Tech"). The first complaint to IER came from a student, a lawful permanent resident, who observed that an employer's paid internship posting on the platform was available only to U.S. citizens. IER's subsequent investigation identified dozens more facially discriminatory postings on the site. IER's announcement of the settlements confirmed that the website allowed employers to post job advertisements that deterred qualified students from applying because of their citizenship status, and in many cases blocked otherwise eligible students from applying altogether, all in violation of federal immigration law.
Similarly, on March 20, 2023, the EEOC announced a settlement with a job search website operator. The underlying charge alleged that the website's customers were posting job ads that discouraged U.S. citizens from applying. The EEOC's conciliation agreement required the operator to "scrape" the website for potentially discriminatory keywords, such as "OPT," "H1B," or "Visa" appearing near the words "only" or "must," in new job postings. In other words, the conciliation agreement required the operator to implement a simple keyword filter to identify potentially discriminatory job postings.
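For illustration, here is a minimal sketch of that kind of keyword filter. The keywords and the "near" test come from the conciliation agreement as described above, but the proximity window, matching rules, and function names are our assumptions, not the operator's actual implementation.

```python
import re

# Sketch of a keyword filter like the one the conciliation agreement
# describes: flag postings where a visa-status term appears near "only"
# or "must". The 40-character window and word-boundary matching are
# assumed details chosen for this illustration.
STATUS_TERMS = r"(?:OPT|H-?1B|visa)"
TRIGGER_TERMS = r"(?:only|must)"
WINDOW = 40  # assumed definition of "near," in characters

PATTERN = re.compile(
    rf"\b{STATUS_TERMS}\b.{{0,{WINDOW}}}\b{TRIGGER_TERMS}\b"
    rf"|\b{TRIGGER_TERMS}\b.{{0,{WINDOW}}}\b{STATUS_TERMS}\b",
    re.IGNORECASE,
)

def flag_posting(text: str) -> bool:
    """True if a new posting should be routed for human review."""
    return PATTERN.search(text) is not None

# Example: this posting would be flagged for review.
print(flag_posting("H1B candidates only; citizens need not apply"))  # True
```

As the sketch suggests, nothing about this remedy involves machine learning; it is a few lines of pattern matching.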
While none of the examples above involves artificial intelligence, each, like the EEOC's iTutor settlement, unquestionably falls under the broader umbrella of "algorithmic fairness." In October 2021, EEOC Chair Charlotte Burrows announced the EEOC's "Artificial Intelligence and Algorithmic Fairness Initiative". A joint statement she issued on April 25, 2023, together with the heads of the Consumer Financial Protection Bureau, the Federal Trade Commission, and the Department of Justice Civil Rights Division, emphasized the agencies' concern about "harmful uses of automated systems", not just artificial intelligence. And the EEOC's draft Strategic Enforcement Plan, published in the Federal Register on January 10, 2023, indicates an enforcement focus on all "automated systems" used in hiring, not just systems that could be characterized as "artificial intelligence".
Many employers are already using (and others are contemplating using) artificial intelligence as part of their hiring and other HR processes. The EEOC's iTutor complaint, combined with the agency's ongoing focus and outreach in this area, means that employers' use of any technology, not just technology characterized as "artificial intelligence," is receiving increased scrutiny.
Whether or not a technology is properly characterized as "artificial intelligence," asserting that "the technology forced me to discriminate" will never be an effective affirmative defense to an EEOC charge or lawsuit. The iTutor settlement should serve as a reminder that a robust compliance and risk-management program should periodically assess and test how technology, both sophisticated and simple, is being used in the hiring process. Given the attention these technologies are receiving from the EEOC and other agencies, we anticipate a significant rise in charge filings, investigations, and litigation relating to these issues.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.