Artificial intelligence (AI) tools, despite their nascency, have already had a significant impact on human resources (HR) professionals and have the potential to revolutionize many aspects of their work. These tools have streamlined and automated HR processes previously performed only by humans, such as talent acquisition, employee onboarding, performance evaluation, and workforce analytics. By leveraging AI algorithms and machine learning, HR professionals can enhance efficiency, make data-driven decisions, and improve the overall employee experience. Relying on AI in HR decision-making is understandably appealing: according to recent data from the Society for Human Resource Management, the average cost per new hire exceeds $4,300, so added efficiencies in HR processes can drastically affect an organization's bottom line. That, at least in part, explains why more than 80% of organizations use AI to make employment decisions in one form or another.

However, the implementation of AI tools in HR also presents significant legal compliance challenges. One major concern is the potential for bias and discrimination: AI algorithms are trained on historical data, and if that data reflects past discriminatory decisions, a tool can learn and reproduce those patterns, leading to biased outcomes in areas like recruitment and performance evaluation.

To overcome these challenges, HR professionals must actively collaborate with AI experts and data scientists to develop AI systems that are fair, transparent, and compliant. They need to invest in training and education to understand the intricacies of AI technology, its limitations, and the biases it can introduce. Establishing clear policies and guidelines for AI tool usage, including regular audits and reviews, is crucial to identifying and rectifying legal compliance issues. Ultimately, HR professionals should maintain a human-centric approach, using AI tools as aids rather than as replacements for human judgment and ethical decision-making, striking a balance between efficiency and legal compliance in the ever-evolving landscape of HR.

This article first briefly addresses the budding regulatory landscape relating to AI tools for employment decision-making. It concludes with some suggestions on risk management and positive approaches that HR professionals can consider as innovative AI tools continue to improve and be incorporated into HR processes.

Recent regulatory action

AI tools are increasingly the subject of legislation and litigation. In October 2021, the Equal Employment Opportunity Commission (EEOC) launched an initiative relating to the understanding and regulation of AI use in employment decision-making. The EEOC's Artificial Intelligence and Algorithmic Fairness Initiative is intended to examine how technology impacts the way employment decisions are made, and give applicants, employees, employers, and technology vendors guidance to ensure these technologies are used lawfully under federal equal employment opportunity laws. Over the past year and a half, the EEOC has held listening sessions with key stakeholders about algorithmic tools and their employment ramifications; gathered information about the adoption, design, and impact of hiring and other employment-related technologies; and begun to identify promising practices.

Perhaps most importantly for HR professionals, the EEOC has begun issuing guidance regarding how organizations can use AI in employment decisions in a lawful manner. In May 2022, it published "technical assistance" relating to compliance with the Americans with Disabilities Act (ADA) when using AI and other tools to hire and assess employees. This educational publication focuses on accessibility requirements and how to make reasonable accommodations when using AI tools.

Consider the following example: an organization uses an AI chatbot designed to engage in written communications with potential applicants. An applicant might ask the chatbot about available openings and the qualifications they require, then be directed to the application. The benefits of this automated tool are apparent (the chatbot can respond in seconds, whereas human engagement could take hours or days), but there are risks as well. For example, the chatbot might automatically reject applicants with significant gaps in their employment history. If those gaps were caused by a disability (e.g., the applicant stopped working to undergo treatment), then the organization could be exposed to claims of discrimination based on the AI tool's decision. The organization employing this chatbot should consider how to mitigate this outcome; implementing human review of AI decisions is one possible solution.
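To make that mitigation concrete, the following is a minimal sketch, in Python, of how a screening rule could route flagged applicants to human review instead of auto-rejecting them. Everything here is hypothetical: the applicant record, the field names, and the six-month threshold are illustrative assumptions, not a depiction of any actual chatbot or vendor tool.

```python
from dataclasses import dataclass

# Hypothetical applicant record; the field names are illustrative only.
@dataclass
class Applicant:
    name: str
    employment_gap_months: list[int]  # lengths of gaps in work history, in months

GAP_THRESHOLD_MONTHS = 6  # illustrative cutoff, not a recommended value

def screen(applicant: Applicant) -> str:
    """Return a routing decision rather than a final verdict.

    Auto-rejecting on an employment gap could screen out applicants
    whose gaps stem from a disability (e.g., time off for treatment),
    creating ADA exposure; flagging for human review keeps a person
    in the loop.
    """
    if any(gap >= GAP_THRESHOLD_MONTHS for gap in applicant.employment_gap_months):
        return "human_review"  # a recruiter evaluates context, including accommodations
    return "advance"           # proceed to the next screening stage

print(screen(Applicant("A. Candidate", [9])))  # -> human_review
print(screen(Applicant("B. Candidate", [2])))  # -> advance
```

The design choice worth noting is that the function returns a routing decision, not a hiring decision: the AI narrows the queue, and a person makes the call wherever disability-related context could matter.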

EEOC technical assistance does not have the force of law and is not binding, but it provides helpful insight into potential enforcement actions the agency may take in the future. Indeed, the significance of the May 2022 technical assistance was underscored in January 2023, when the EEOC published its draft Strategic Enforcement Plan (SEP) outlining where and how the agency will direct its resources. The EEOC plans to focus on "the use of automatic systems, including artificial intelligence or machine learning, to target advertisements, recruit applicants, or make or assist in hiring decisions where such systems intentionally exclude or adversely impact protected groups."

In May 2023, the EEOC further demonstrated its commitment to its enforcement plan by issuing another AI-related technical assistance publication, this time about how AI tools can implicate Title VII compliance. The May 2023 Title VII technical assistance encourages organizations to evaluate AI-powered employment screening tools under the 1978 Uniform Guidelines on Employee Selection Procedures, which provide guidance about how to determine if decision-making procedures are compliant with Title VII disparate impact analysis. Though significantly narrower than the ADA publication, the EEOC's May 2023 Title VII technical assistance may signal that the EEOC believes AI tools can and should fit within existing regulatory structures.
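For a concrete sense of what that disparate impact analysis can look like, below is a minimal sketch, in Python, of the Uniform Guidelines' "four-fifths rule," under which a selection rate for any group that is less than four-fifths (80%) of the rate for the highest-selected group is generally regarded as evidence of adverse impact. The data format, tallies, and function names are assumptions for illustration; an actual audit should involve counsel and appropriate statistical rigor.

```python
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Map each group to its selection rate: selected / considered."""
    return {group: selected / total for group, (selected, total) in outcomes.items()}

def four_fifths_check(outcomes: dict[str, tuple[int, int]]) -> dict[str, bool]:
    """Flag groups whose selection rate falls below 4/5 of the highest rate.

    This mirrors the adverse-impact heuristic in the 1978 Uniform
    Guidelines; it is a screening signal, not a legal conclusion.
    """
    rates = selection_rates(outcomes)
    highest = max(rates.values())
    return {group: rate < 0.8 * highest for group, rate in rates.items()}

# Hypothetical tallies: (selected, considered) per group.
outcomes = {"group_a": (48, 100), "group_b": (30, 100)}
print(four_fifths_check(outcomes))  # -> {'group_a': False, 'group_b': True}
```

On the sample tallies, group_b's 30% selection rate falls below four-fifths of group_a's 48% rate (38.4%), so the tool's output would warrant closer scrutiny.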

Whether the EEOC's regulatory enforcement strategy will be workable remains to be seen. Regardless, the May 2022 ADA technical assistance, January 2023 SEP, May 2023 Title VII technical assistance, and any future guidance issued by the EEOC should be carefully heeded by HR professionals to mitigate regulatory liability and to ensure their organizations' compliance with legal duties.

What to do?

The proliferation of AI tools has the potential to ease administrative burdens, but it also raises new legal compliance concerns. HR professionals are likely familiar with identifying and avoiding bias in human decision-making, but how can bias be avoided when decisions are outsourced to AI? Organizations should take a proactive approach to scrutinizing and integrating AI tools into their operations with legal compliance in mind. The following are several recommendations for how to do so:

  1. Identify how AI will be used. Organizations can use AI tools in myriad ways, but before stakeholders can make an informed decision on implementing such tools, they should understand exactly how and why the technology will be used. This will allow them to ensure compliance with legal requirements, evaluate risks, and employ risk-mitigation measures.
  2. Test the AI. An ounce of prevention is worth a pound of cure. AI tools should be tested prior to implementation to determine whether they exhibit bias or other flaws that could undermine their usability or expose the organization to liability. An organization can take a variety of pre-implementation measures to reduce potential bias in AI outputs, including relying on diverse and representative training data, regularly reviewing and testing outputs for bias (for instance, with an adverse-impact check like the four-fifths sketch above), and disclosing training data and algorithms to enhance transparency and accountability.
  3. Establish guidelines. Once an organization has defined how it wants to use an AI tool and tested it for bias, it should prepare a policy establishing guidelines for the tool's use. Such guidelines might include identifying who may use the tool, the types of data that may be entered into it, the level of decision-making that can be based on its output (i.e., none, preliminary subject to human review, or final), how the output will be described to its users, and information about the organization's governance and oversight of the AI tool. A minimal sketch of how such a decision-level policy might be encoded appears after this list.
  4. Actively and frequently address legal compliance. Organizations should rely on their legal counsel to check for recent legal developments and their implications before launching any AI tool. Organizations should continue to periodically re-assess the situation post-implementation as well. Needs may have changed, or the vendor may have expanded, reduced, or otherwise changed the tool such that it is no longer an appropriate fit for the organization. Risks should be frequently re-evaluated to understand how they have changed and to ensure previous mitigation measures are still effective.
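As referenced in item 3, the permitted level of decision-making can be encoded directly into the systems that consume an AI tool's output. The following Python sketch is purely illustrative, assuming a hypothetical per-tool policy table and tool names; it shows one way to prevent an AI output from becoming a final decision unless the organization's guidelines allow it.

```python
from enum import Enum

class DecisionLevel(Enum):
    NONE = "none"                # output is informational only
    PRELIMINARY = "preliminary"  # output must pass human review
    FINAL = "final"              # output may stand on its own

# Hypothetical per-tool policy table maintained under the organization's guidelines.
TOOL_POLICY = {
    "resume_screener": DecisionLevel.PRELIMINARY,
    "interview_scheduler": DecisionLevel.FINAL,
}

def apply_output(tool: str, ai_decision: str, human_approved: bool = False) -> str:
    """Gate an AI tool's output according to the governing policy."""
    level = TOOL_POLICY.get(tool, DecisionLevel.NONE)  # unknown tools get no authority
    if level is DecisionLevel.FINAL:
        return ai_decision
    if level is DecisionLevel.PRELIMINARY and human_approved:
        return ai_decision
    return "pending_human_review"

print(apply_output("resume_screener", "advance"))                       # -> pending_human_review
print(apply_output("resume_screener", "advance", human_approved=True))  # -> advance
```

Defaulting unlisted tools to no decision authority keeps new or unvetted tools from silently making final calls.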

Conclusion

HR professionals are pioneers in implementing cutting-edge AI technology for use in employment decisions. It is therefore crucial that they be intentional and thoughtful about integrating these tools into their decision-making processes. An ongoing commitment to monitoring and assessing both the technology and the law is critical to maintaining legal compliance once AI is incorporated into HR decisions. Informed, deliberate implementation of AI technology can effectively and compliantly maintain the inimitable "human" aspect of "human resources."

Originally published by HR Professionals Magazine
