Fort Lauderdale, Fla. (November 4, 2024) - In response to President Biden's October 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, on October 17, 2024, the U.S. Department of Labor ("DOL") published its long-awaited guidance on Artificial Intelligence ("AI") and Worker Well-Being. The DOL guidance provides a roadmap to ensure that employers' use of artificial intelligence enhances job quality and safeguards workers' rights and well-being rather than undermining them. Although the DOL notes that this guidance is neither binding nor intended to modify or supplant existing law, regulations, or policies, employers should take note because the DOL's guidance is a blueprint for best practices to follow across all sectors and workplaces.
The keystone principle (the "North Star," as the DOL calls it) is the use of AI for worker empowerment. According to the DOL, workers and their representatives, especially those from underserved communities, should be informed of and have genuine input into the design, development, testing, training, use, and oversight of AI systems in the workplace. As a means to that end, the DOL encourages employers to ensure ethical AI development, organizational AI governance with human oversight, transparency in AI usage, protection of labor and employment rights, and responsible use of workers' data.
The DOL's chief concern regarding the North Star principle is the potential for businesses and employers to use AI to make material and final decisions regarding all aspects of "hiring, job assignment/placement, reassignment, promotions, compensation, scheduling, performance standards, discipline, discharge, and any other decisions affecting the terms and conditions of employment," regardless of whether those decisions are legally sound. As specific examples, the DOL cites its goals of preventing employers from using AI to take action against workers for discriminatory reasons, to retaliate against workers for engaging in protected activities, or to program AI models with the capacity to make recommendations along those lines.
Given those pressing concerns, the DOL directs employers to protect workers' civil rights when using AI. Employers should assess the risks of algorithmic discrimination or bias and routinely audit the AI systems in place for disparate or adverse impacts on protected characteristics (e.g., race, color, national origin, age, disability, religion, etc.). In fact, under the North Star principle, the DOL empowers AI developers to restrict or contractually prohibit the use of their AI systems to limit workers' rights and safety. While this is reassuring guidance, employers cannot rest solely on the assurances of their AI providers.
The DOL also encourages transparency concerning AI usage with both current and prospective employees. Employers should provide workers and their representatives advance notice and appropriate disclosure if they intend to use worker-impacting AI. This disclosure should include an explanation of the purpose of the AI system; how job seekers or workers will engage with the worker-impacting AI system; and how the AI systems will be used to monitor workers, direct work, or inform significant employment decisions. Employers should also provide, upon request, a means by which employees can review the personal data the AI is collecting and submit corrections or clarifications to that data.
The DOL also strongly encourages employers to create jobs to review and refine the data inputs used to create their AI systems, which provides the dual benefit of promoting the DOL's principle of ensuring human oversight of AI. Because humans both write and understand the laws in ways that AI currently cannot, the DOL tasks employers with human review of AI decision-making to ensure compliance with federal, state, and local laws regarding labor and employment, including anti-discrimination, anti-retaliation, and wage-and-hour laws.
Although the DOL publication idealizes the job-creation and skill-enhancement aspects of AI, it also acknowledges that human job loss due to AI is inevitable: "[T]here inevitably will be instances when organizations restructure, repurpose, or eliminate specific job functions in their operations due to the use of AI." To that end, the DOL encourages employers to protect and augment human jobs, stating that employers implementing AI "should" provide training and educate their workforce.
This best practices paper is not the only DOL publication regarding the use of AI in labor practices. The DOL's Office of Federal Contract Compliance Programs published its own guidelines to clarify federal contractors' legal obligations, promote equal employment opportunities, and mitigate the potentially harmful impacts of AI in employment decisions. It echoes many of the points raised by the DOL as they apply to federal contractors. Individuals and businesses that contract with the federal government are required to ensure that they do not discriminate in employment and that they take affirmative action to ensure employees and applicants are treated without regard to their protected characteristics. Critically, a federal contractor cannot escape liability for the adverse impact of discriminatory screenings conducted by a third party, such as a staffing agency, HR software provider, or vendor. Thus, federal contractors must be vigilant to ensure that they are always in compliance with federal laws, even if they have delegated the particular task to someone else.
Moreover, the DOL's Wage and Hour Division's Field Assistance Bulletin, issued on April 29, 2024, addresses the use of AI under the Fair Labor Standards Act and other federal labor standards. The Bulletin states that blind reliance on AI may cause employers to undercalculate hours worked and wages owed, and thereby create liability under the FLSA. The Division also posed a scenario in which an AI system used to determine FMLA leave eligibility requires an employee to furnish more protected medical information than necessary, which could expose the employer to legal liability. Finally, repeating one of the DOL's primary concerns, the Bulletin cautions employers that the use of AI to detect, target, or monitor employees based on their AI-predicted likelihood of engaging in protected activity may violate anti-retaliation provisions under several federal labor laws.
If there is a throughline in the DOL's publications, it is the need to maintain "the human in the loop" to ensure oversight of any AI-related employment tools. Employers simply cannot hand the reins over to AI systems and point the finger at the AI if something goes wrong. As highlighted by the Department of Justice, there is no "AI exception" to the law; "Discrimination using AI is still discrimination."
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.