A significant percentage of employees in the United States are using Artificial Intelligence (AI) in the workplace. The use of AI will only grow with time because of its potential to improve productivity and efficiency. Employers need to act now to ensure that they comply with employment-related laws in this evolving legal landscape.

What Is AI?

Artificial Intelligence, or AI, refers to the development of computer systems and software that can perform tasks requiring human intelligence. It involves creating algorithms and models that enable machines to perceive, learn, reason, and make decisions in a manner that emulates human cognitive abilities. AI systems are designed to analyze vast amounts of data, recognize patterns, make predictions, and adapt to changing circumstances.

AI in the Workplace

AI is already being widely used in the workplace. Some surveys report that nearly 6 out of every 10 employees use generative AI in the workplace. Many of these employees use AI at work without their employer's knowledge.

AI has numerous applications in the workplace. For example, human resources professionals commonly use AI tools in recruiting and hiring, where algorithms help identify qualified candidates and streamline the selection process. The use of AI tools at work generally, and in recruiting and hiring specifically, can expose employers to legal risk if those tools run afoul of federal, state, and local anti-discrimination statutes.

Existing Legislation That May Be Implicated by the Use of AI

The use of AI must comply with existing employment laws.

Title VII of the Civil Rights Act of 1964 prohibits employment discrimination based on race, color, religion, sex, and national origin. In May 2023, the Equal Employment Opportunity Commission (EEOC) issued a technical assistance document explaining Title VII's application to an employer's use of artificial intelligence in the workplace. The technical guidance urges employers to be mindful of potential bias and discrimination in their automated systems and to regularly evaluate and monitor AI systems to ensure compliance with Title VII and other anti-discrimination laws.

The Americans with Disabilities Act (ADA) prohibits discrimination against individuals with disabilities in various areas, including employment. With respect to AI, employers must ensure accessibility and non-discrimination for individuals with disabilities in the design and use of AI systems. In May 2022, the EEOC issued a technical assistance document explaining the application of the ADA to an employer's use of artificial intelligence in the workplace. The guidance explains that an employer's use of an algorithmic decision-making tool may be unlawful if (1) the employer does not provide a reasonable accommodation necessary for a job applicant or employee to be rated fairly and accurately by the algorithm; (2) the tool intentionally or unintentionally screens out an individual with a disability, even though that individual is able to do the job with a reasonable accommodation; or (3) the tool violates the ADA's restrictions on disability-related inquiries and medical examinations.

The Age Discrimination in Employment Act (ADEA) prohibits employment discrimination against individuals 40 years of age or older. When considering the implications of the ADEA for AI, it is important to ensure that AI systems used in the workplace do not inadvertently discriminate against older workers or contribute to age-related biases. In fact, one of the EEOC's earliest settlements followed allegations that the employer programmed its recruitment software to reject older applicants automatically.

Additionally, some state and local laws in the United States have imposed restrictions on the use of AI in the workplace (e.g., Illinois, Maryland, and New York City). These laws aim to address potential biases, discrimination, and privacy concerns associated with AI systems. For example, a New York City local ordinance requires that if an employer uses AI tools in hiring, those tools must be subject to an independent bias audit, and the employer must make a summary of the most recent bias audit results available on its website. Other states have proposed legislation that is working its way through their legislatures.

Organizations should consult legal experts or employment law specialists to ensure that the use of AI is consistent with applicable law. Furthermore, to the extent that the use of AI results in layoffs or reductions in force, employers should ensure that they are mindful of the federal Worker Adjustment and Retraining Notification Act and equivalent state and local laws.

New Legislation Intended to Address the Use of AI

While legislation regarding AI has been introduced, no federal legislation currently governs its use in the employment context specifically. Nonetheless, various federal enforcement agencies have issued guidance on the use of AI.

The Bureau of Consumer Financial Protection, the Department of Justice, the Equal Employment Opportunity Commission, and the Federal Trade Commission have issued a Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems. Within this Joint Statement, these enforcement agencies reaffirmed their commitment to protecting civil rights, fair competition, consumer protection, and equal opportunity. The Statement explains that potential discrimination in automated systems may come from different sources, including problems with (1) data and datasets, (2) model opacity and access, and (3) design and use.

In October 2022, the White House issued a Blueprint for an AI Bill of Rights. This Blueprint articulates five principles for the safe and ethical development and use of AI in the United States: (1) safe and effective systems, (2) algorithmic discrimination protections, (3) data privacy, (4) notice and explanation, and (5) human alternatives, consideration, and fallback.

In October 2023, President Biden issued an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, outlining the initial steps for the safe, secure, and trustworthy development and use of AI in the United States and operationalizing the principles articulated in the White House's Blueprint for an AI Bill of Rights. Although the Order primarily applies to federal agencies and contractors, it sets a precedent for future AI development in other industries and sectors.

The Executive Order also tasks the National Institute of Standards and Technology (NIST) with developing guidelines, best practices, and testing standards for the safe and ethical deployment of AI. NIST has already published an AI Risk Management Framework that, while voluntary, should be reviewed and used by any company or industry developing AI.

What Employers Should Do Now

Employers should develop policies addressing the use of AI at work and offer training on both the appropriate uses and the potential limitations of AI tools. Employers can also create guardrails against the unbridled use of AI by providing an environment where employees can test AI tools and services in a risk-free zone.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.