ARTICLE
15 November 2024

We Didn't Do It, The Robot Did: Practices And Perils For AI In HR

Bass, Berry & Sims

Bass, Berry & Sims is a national law firm with nearly 350 attorneys dedicated to delivering exceptional service to numerous publicly traded companies and Fortune 500 businesses in significant litigation and investigations, complex business transactions, and international regulatory matters. For more than 100 years, our people have served as true partners to clients, working seamlessly across substantive practice disciplines, industries and geographies to deliver highly-effective legal advice and innovative, business-focused solutions. For more information, visit www.bassberry.com.

Highlights:

  • A U.S. federal-level, AI-specific law does not exist yet.
  • Government agencies, including EEOC and DOL, have opined on the applicability of current laws and regulations to AI deployment.
  • Regulators are clear that civil rights, privacy expectations, and accessibility protections remain, even in the context of AI.

As powerful artificial intelligence (AI) tools saturate markets, many organizations cite AI adoption as a priority. Labor-intensive functions, such as human resources (HR), are often at the forefront of the AI adoption push. 

Though AI feels new to some, many of the laws and regulations governing AI deployment are not. Not surprisingly, whether AI realizes its potential without generating crippling risk depends on how it is adopted and the safeguards implemented.

What Is AI?

AI comes in several flavors, but organizations and regulators have recently focused on:

  • Generative AI: Technology that creates new content (e.g., text, images, videos) based on the data it has been trained on.
  • Large language models (LLMs): Computational models trained on large volumes of data (e.g., books, articles, policies, procedures, evaluations, images, videos) that use algorithms to analyze, understand, and synthesize that data to perform specified tasks, such as creating summaries and comparisons, identifying patterns, and answering questions.
  • Deep learning: A sophisticated form of machine learning that uses artificial, often multilayered, neural networks to simulate the complex decision-making of the human brain in order to perform tasks such as generating predictions, offering insights, analyzing trends, or engaging in automated iterative “conversations.”

Legal and Regulatory Frameworks to Consider

A U.S. federal-level, AI-specific law does not exist yet, but government agencies – including the U.S. Equal Employment Opportunity Commission (EEOC) and Department of Labor (DOL) – have opined on the applicability of current laws and regulations to AI deployment. Regulators are clear that civil rights, privacy expectations, and accessibility protections remain, even in the context of AI.

Title VII of the Civil Rights Act of 1964 prohibits employment discrimination based on race, color, religion, sex, and national origin. It also prohibits employment decisions based on stereotypes or assumptions about any of these characteristics.

The Americans with Disabilities Act of 1990 (ADA) prohibits employment discrimination against persons with disabilities. 

The Age Discrimination in Employment Act of 1967 (ADEA) protects individuals ages 40 and over from employment discrimination on the basis of age (e.g., in hiring, promotion, and compensation).

Recent EEOC guidance reminds employers that even a neutral test or selection procedure is prohibited if it has the effect of disproportionately excluding protected-class individuals. The EEOC reinforces that algorithmic decision-making tools qualify as a “selection procedure.” Further, employers remain responsible for the actions of their agents, including AI vendors and other third parties that use such tools on their behalf.

EEOC guidance also cites several ways AI use might violate the ADA or ADEA – for example, if AI-based screening or analysis of incoming resumes or employee performance documentation intentionally or unintentionally screens out applicants on the basis of disability or age. In one enforcement action, the EEOC alleged that the hiring software used by iTutorGroup Inc. automatically rejected or screened out applicants over certain ages.

At the state level, a handful of states or localities have enacted AI-specific laws directly impacting common HR use cases. 

The Colorado Artificial Intelligence Act requires employers that use AI for the provision or denial of job opportunities to take “reasonable care” to protect individuals from algorithmic discrimination and provide notice to individuals regarding the use of AI and information it will collect. Employers using AI for this purpose must also create AI risk management policies and complete periodic impact assessments. 

The Utah Artificial Intelligence Policy Act requires disclosure or notice when an individual interacts with AI. 

The Illinois Artificial Intelligence Video Interview Act (AIVIA) requires employers that ask job candidates to record video interviews, and then analyze those videos using AI, to notify applicants of the AI use before the interview and obtain their prior consent.

Maryland Code Section 3-717 prohibits employers from using certain facial recognition technologies and analysis functionalities during the interview process to create a facial template. Applicants can consent to such use by signing a waiver.

New York City Local Law 144 (Automated Employment Decision Tools (AEDT) Law) prohibits the use of an AEDT by employers unless the AEDT has been subject to a bias audit within the last year, and information about the bias audit must be made publicly available. At a minimum, the bias audit information must disclose selection rates across gender and race or ethnicity categories. Applicants must be informed of the intended use of an AEDT and provided a mechanism by which to request an alternative selection process.
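To illustrate the kind of arithmetic a bias audit involves, the sketch below computes per-category selection rates and each category's impact ratio relative to the most-selected category – the general form of the figures LL144 bias audits report. The applicant data and category labels are hypothetical, and a real audit must follow the law's implementing rules, not this simplification.

```python
# Hypothetical applicant records: (category, was_selected) pairs.
applicants = [
    ("Category A", True), ("Category A", True),
    ("Category A", False), ("Category A", False),
    ("Category B", True), ("Category B", False),
    ("Category B", False), ("Category B", False),
]

def selection_rates(records):
    """Selection rate per category: selected count / total applicants in that category."""
    totals, selected = {}, {}
    for category, was_selected in records:
        totals[category] = totals.get(category, 0) + 1
        selected[category] = selected.get(category, 0) + int(was_selected)
    return {c: selected[c] / totals[c] for c in totals}

def impact_ratios(rates):
    """Impact ratio: each category's rate divided by the highest category rate."""
    top = max(rates.values())
    return {c: rate / top for c, rate in rates.items()}

rates = selection_rates(applicants)   # Category A: 0.50, Category B: 0.25
ratios = impact_ratios(rates)         # Category A: 1.00, Category B: 0.50
```

A low impact ratio for a category (here, Category B is selected at half the rate of Category A) is the kind of disparity a bias audit is designed to surface and disclose.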

The above are in addition to the landmark enactment of the California Consumer Privacy Act as amended by the California Privacy Rights Act (collectively, the CCPA). In this context, the CCPA requires both notice of the categories of personal data to be collected and any use cases that may involve automated decision-making or profiling.

Tips for AI Deployment in HR

1. Identify current or planned use cases. HR functions must remain aware of any current or planned AI use cases impacting employees or employment-related functions. Myriad laws protect certain rights and expectations of these individuals, and – as the EEOC has made clear – employers are likely to be held liable for any bias, discrimination, or wrongful termination claim that occurs as a result of AI (or a vendor's use of AI). 

Take steps to understand AI use cases and tools so the organization can determine whether it has a lawful basis for collecting any personal information that may be processed, and institute any notices or affirmative consents that may be required.

2. Audit and assess AI tools. Undertake comprehensive assessments and conduct diligence to understand what steps have been taken to evaluate whether the tool results in or perpetuates bias or discrimination and what type of data the tool ingests or analyzes. This may be in the form of a data protection impact assessment or a set of required vendor onboarding questions. 

3. Understand any training data or feedback loops. Understand the training data sets underlying the AI to protect against a data set already infected with biased or discriminatory tendencies. If the training data is biased, the output likely will be, too. 

4. Provide transparency about AI use.  Even if not subject to the CCPA or the AI-specific video interview laws seen in Illinois or Maryland that mandate transparency, EEOC guidance indicates that regulators are focused on transparency about what types of AI-based tools may be used not only when applying but also in day-to-day functions, such as performance monitoring and compensation. 

5. Implement protections.  On a continuing basis, organizations must deploy deliberate and quantifiable mechanisms to protect against bias or discrimination. This may include pausing or sunsetting a particular AI tool if a court or the organization determines that the tool or use case results in biased or discriminatory outcomes.

Data security and privacy also represent a material risk for AI deployment. AI tools and vendors must be vetted for and audited against comprehensive data security measures and contractual commitments to privacy.

Countless and exciting AI use cases will continue to permeate HR functions. HR teams must remain committed to understanding AI tools and use cases, proactively communicating such tools and use cases in a transparent manner, and leveraging them to protect against rather than magnify bias and discrimination.

Originally published by HR.com's HR Legal & Compliance Excellence

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
