The last several years have seen algorithm-driven technologies like artificial intelligence (AI) and other algorithmic or automated decision-making systems proliferate throughout nearly every industry, from managing supply chains to detecting and preventing fraud to helping farmers decide which crops to plant. Employment-related decisions are no exception: employers are increasingly using algorithmic decision-making systems to hire and evaluate employees, often with an eye to eliminating bias and discrimination from these processes. Employers may be surprised to learn, however, that these tools can perpetuate the very discrimination they are supposed to address, especially against persons with disabilities. That is why the Equal Employment Opportunity Commission (EEOC) recently issued its first-ever guidance on important considerations and best practices to help employers ensure that the algorithm-driven technologies they use do not violate Title I of the Americans with Disabilities Act (ADA) by unlawfully disadvantaging applicants or employees with disabilities. In this article, we provide the key takeaways from the EEOC's guidance.

The EEOC's Guidance Is Broad in Scope

The EEOC's guidance, though non-binding, is intended to cover a wide range of technologies that use algorithmic decision-making or AI.

To understand the guidance's scope, it helps to know what algorithms, algorithmic decision-making, and AI are. The EEOC's guidance defines an algorithm as "a set of instructions that can be followed by a computer to accomplish some end." Algorithmic decision-making, then, generally refers to automated decisions based on a computer's processing of certain data according to the instructions it is given. As a simple example, in the hiring context, an organization that wants to hire only candidates with certain minimum grades may configure its applicant-management system to automatically reject all applicants who fall below that threshold. The EEOC's guidance notes that algorithmic decision-making systems include "automatic resume-screening software, hiring software, chatbot software for hiring and workflow, video interviewing software, analytics software, employee monitoring software, and worker management software."
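To make the distinction concrete, consider a minimal sketch of the rule-based example above. The following Python snippet is purely illustrative (the names, threshold, and data are invented for this article): the rejection criterion is an explicit instruction written by a human, which the system simply applies to every applicant.

```python
from dataclasses import dataclass

# Hypothetical illustration of explicit, rule-based algorithmic
# decision-making: the employer writes the criterion; the computer
# merely applies it.

MIN_GPA = 3.0  # threshold set by the employer, not learned from data

@dataclass
class Applicant:
    name: str
    gpa: float

def screen(applicants: list[Applicant]) -> list[Applicant]:
    """Keep only applicants who meet the employer-defined threshold."""
    return [a for a in applicants if a.gpa >= MIN_GPA]

pool = [Applicant("A", 3.4), Applicant("B", 2.8), Applicant("C", 3.9)]
print([a.name for a in screen(pool)])  # ['A', 'C']; B is rejected automatically
```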

Similarly, the EEOC's guidance defines AI as "a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments." The guidance explains that, in the employment context, "AI has typically meant that the developer relies partly on the computer's own analysis of data to determine which criteria to use when making employment decisions." Returning to the above example, instead of the organization giving the applicant-management system explicit instructions to reject certain applicants, the system's AI may learn that an applicant's grades are the best indicator of whom to reject; or it may learn to reject applicants based on the school they attended or any of myriad other criteria. Examples noted in the EEOC's guidance include, among other things, software that monitors employees and rates them based on their keystrokes, video interviewing software that evaluates candidates based on their facial expressions and speech patterns, and testing software that generates "job fit" or "cultural fit" scores for applicants or employees.
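The AI-driven variant differs in a crucial way: the criteria are learned from data rather than written by a person. The hypothetical sketch below (using scikit-learn's logistic regression on invented toy data) fits a model to past hiring outcomes; the learned weights, not any human-authored rule, determine who is rejected. This is why such a system can come to rely on proxies, such as the school an applicant attended, that no one deliberately chose and that may correlate with a disability.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical illustration of AI-driven decision-making: the model
# infers its own criteria from historical outcomes.

# Each row: [gpa, years_experience, attended_school_x]; label 1 = hired.
# Toy data invented purely for illustration.
X = np.array([[3.8, 2, 1], [2.4, 5, 0], [3.5, 1, 1],
              [2.9, 3, 0], [3.9, 4, 1], [2.2, 0, 0]])
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# The learned weights are the "criteria" the EEOC describes: nothing in
# this code names grades or school as the deciding factor, yet the model
# may lean heavily on either, including proxies no one chose on purpose.
print(dict(zip(["gpa", "experience", "school_x"], model.coef_[0].round(2))))

candidate = np.array([[3.1, 2, 0]])
print(model.predict(candidate))  # accept/reject decision learned from data
```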

Employers Are Responsible for Their Technologies and Vendors

Under the EEOC's guidance, an employer can be liable for violating the ADA if a technology it uses in making employment-related decisions discriminates against individuals with disabilities, even if the system is designed or administered by another entity. Likewise, employers are also responsible for ensuring that individuals with disabilities receive reasonable accommodations, even if the accommodation request is made directly to an external entity that administers the system.

Common Ways Algorithmic Decision-Making Tools May Violate the ADA

The EEOC's guidance provides a non-exhaustive list of three common ways in which an employer's use of algorithmic decision-making tools can violate the ADA:

  1. Failing to provide reasonable accommodations to individuals with disabilities: As with other assessments, an employer must provide a reasonable accommodation to an applicant or employee with a disability where one is necessary for the individual "to be rated fairly and accurately by the algorithm," unless doing so would involve significant difficulty or expense (i.e., an undue hardship). The EEOC notes, for example, that an employer may have to provide a reasonable accommodation to an applicant whose limited dexterity is a barrier to taking a knowledge test that requires the use of a keyboard, trackpad, or other input device, such as by allowing the applicant to provide responses orally or by providing an alternative test if the original test cannot be made accessible.
  2. "Screening out" individuals with disabilities: An employer's use of algorithmic-driven tools violates the ADA if the tool "prevents a job applicant or employee from meeting—or lowers their performance on—a selection criteria, and the applicant or employee loses a job opportunity as a result," despite otherwise being able to perform the essential functions of the job with a reasonable accommodation. In such a situation, the employer has violated the ADA even if the "screening out" was unintentional or inadvertent. For example, video interviewing software that analyzes an individual's speech patterns might screen out an individual with a speech impediment; or a test intended to measure an individual's focus might screen out an individual whose disability impacts their ability to ignore distractions.
  3. Making a disability-related inquiry or seeking information that qualifies as a medical examination: Under the ADA, an employer may not ask an applicant or employee questions that are likely to elicit information about a disability, or seek information about an individual's physical or mental impairments or health. The EEOC clarifies, however, that algorithmic decision-making tools may lawfully pose a question to an applicant or an employee that "might somehow be related to some kinds of" disabilities. For example, asking whether the individual is "described by friends as being 'generally optimistic'" does not violate the ADA. But even though such questions do not violate the ADA's restrictions on disability-related inquiries and medical examinations, they could still violate the ADA in other ways; in the above example, the employer would violate the ADA if the individual were "screened out" for answering negatively because they have Major Depressive Disorder.

Best Practices Recommendations

The EEOC's guidance provides best practices for employers to avoid discriminating against individuals with disabilities and thus reduce their risk of liability. Among other things, such best practices include:

  1. Making the evaluation and accommodations processes transparent: The EEOC recommends ensuring that instructions for requesting an accommodation are easy to find and easy to follow. Additionally, to assist applicants and employees in knowing if they will need an accommodation, the EEOC encourages employers to "tell applicants and employees what steps an evaluation process includes," including by "[d]escribing, in plain language and in accessible formats, the traits that the algorithm is designed to assess, the method by which those traits are assessed, and the variables or factors that may affect the rating." Notably, Illinois employers who use AI to analyze video interviews are already required to notify applicants beforehand of how the tool works and the characteristics that will be used to evaluate them. And effective January 1, 2023, employers in New York City will likewise be required to explain to applicants and employees how their algorithmic decision-making tools work and the qualifications and characteristics the employer is using them to measure.
  2. Narrowly tailoring the use of algorithmic decision-making tools: To minimize the chances of unlawfully screening out individuals with disabilities, the EEOC recommends that employers use algorithmic decision-making tools to measure only those abilities or qualifications that "are truly necessary for the job—even for people who are entitled to an on-the-job reasonable accommodation." The EEOC also recommends that employers ensure that "necessary abilities or qualifications are measured directly, rather than by way of characteristics or scores that are correlated with those abilities or qualifications."
  3. Confirming compliance with the tool's vendor: The EEOC recommends that an employer using an algorithmic decision-making tool designed or administered by a vendor "confirm that the tool does not ask job applicants or employees questions that are likely to elicit information about a disability or seek information about an individual's physical or mental impairments or health, unless such inquiries are related to a request for reasonable accommodation." Notably, effective January 1, 2023, New York City employers will be required to conduct independent bias audits to ensure that the algorithmic decision-making tools they use do not adversely impact applicants and employees on the basis of race, ethnicity, or sex.

Future Regulation of Algorithmic Decision-Making Tools

This guidance and other actions by the EEOC indicate that it will be watching for other forms of discrimination that may result from the use of algorithmic decision-making tools in employment-related decisions, especially given past reports that automated hiring tools have discriminated against applicants based on their gender and race. For example, on May 5, 2022, only a week before releasing its guidance on algorithmic decision-making, the EEOC filed a complaint in the Eastern District of New York alleging that iTutorGroup, Inc., a software company that provides online English-language tutoring to individuals in China, violated the Age Discrimination in Employment Act by using job application software that automatically disqualified applicants over a certain age. EEOC v. iTutorGroup, Inc., No. 1:22-cv-02565-PKC-RLM (E.D.N.Y.). iTutorGroup has not yet filed a response to the complaint.

Modern technologies can help improve employment-related decision-making processes, but the rules governing their use are evolving rapidly. The authors of this article and the rest of Venable's Labor and Employment Group are here to help you navigate the legal landscape and implement these technologies while avoiding liability under federal, state, and local law. Please contact us for assistance with incorporating new technologies into your current employment processes.