6 October 2025

Employment Tip Of The Month – October 2025

Wilson Elser Moskowitz Edelman & Dicker LLP


Q: Can employers safely use artificial intelligence (AI) in the hiring process?

A: While some states and jurisdictions do not have any direct law or statute regulating the use of AI during hiring, employers should be concerned about the potential bias in the output from AI used in employment decisions and take steps to ensure that they are not in violation of discrimination and privacy laws.

Background on Use of AI
The adoption of AI has accelerated rapidly in recent years, changing the way we work across many industries. From automating routine tasks to assisting in complex decisions, AI has brought exciting possibilities. That said, this rapid advancement and expansion has raised serious concerns in the employment context, particularly around algorithmic bias, workplace surveillance, and job displacement. As AI becomes more integrated into workplace management, it is imperative for employers to stay ahead of evolving legal standards to ensure compliance, protect employee rights, and manage risk around accountability and fairness.

Public officials and legislatures are increasingly focused on the potential risks and benefits that come with using AI technology. In the employment realm specifically, laws affecting employers include those that regulate the use of automated employment decision tools (AEDTs) in the hiring process. Examples of this legislation can be seen in New York Local Law 144 and Illinois H.B. 3773. New York Local Law 144 was the first of its type to create obligations for employers when AI is used for employment purposes. Illinois followed by enacting H.B. 3773, making it the second state to pass broad legislation on the use of AI in the employment context.

More legislation is on the way in 2025, generally falling into two distinct categories. The first would require employers to provide notice that an automated decision-making system for the purpose of making employment-related decisions is in use. The second aims to limit the purpose and way an automated decision system may be used. This article will highlight proposed, enacted, and failed legislation, and offer takeaways about what to be aware of moving forward.

Currently Enacted Legislation
New York has followed a broad trend seeking to bring transparency to the use of automated decision systems, including AI, in employment and other areas through two pieces of legislation. New York Local Law 144, which took effect on January 1, 2023, prohibits employers and employment agencies from using an AEDT in New York City unless they ensure the tool has undergone a bias audit and provide the required notices. If employers or employment agencies use an AEDT to substantially help them assess or screen candidates at any point in the hiring or promotion process, they must comply with the law's requirements. The next piece of legislation, New York S.B. 822, effective July 1, 2025, amended existing law on AI and employment regarding state agencies and prohibits the use of AI to affect existing rights of employees pursuant to a collective bargaining agreement.
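To give a rough sense of what a bias audit measures, the sketch below computes selection rates and impact ratios per demographic category, loosely modeled on the impact-ratio concept in the Local Law 144 rules. The category names and numbers are hypothetical; a real audit must follow the law's published requirements and be performed by an independent auditor.

```python
# Hypothetical sketch of the selection-rate math behind a bias audit,
# loosely modeled on the impact-ratio concept in NYC Local Law 144.
# Category names and counts are illustrative, not real audit data.

def selection_rates(outcomes):
    """outcomes: {category: (selected, total)} -> {category: rate}"""
    return {cat: sel / total for cat, (sel, total) in outcomes.items()}

def impact_ratios(outcomes):
    """Each category's selection rate divided by the highest category's rate."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {cat: rate / top for cat, rate in rates.items()}

applicants = {
    "group_a": (50, 100),  # 50% of this group advanced
    "group_b": (30, 100),  # 30% of this group advanced
}
ratios = impact_ratios(applicants)
# group_a -> 1.0 (the most-selected category), group_b -> 0.6
```

A large gap between ratios (here, group_b at 0.6) is the kind of disparity a bias audit is meant to surface and report.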

Illinois joined New York in passing legislation to regulate AI and the risks associated with its use in the employment context. H.B. 3773 amended the Illinois Human Rights Act (IHRA) and affects any employer who currently uses, or intends to use, AI, including generative AI, to make decisions around recruitment, hiring, promotions, training, discharge, or any other term or condition of employment. The amendment prohibits employers from using AI in ways that may lead to discriminatory outcomes based on characteristics protected under IHRA. Additionally, employers are required to give notice if they are using AI in this realm. The Illinois Department of Human Rights and Illinois Human Rights Commission will enforce the law, with remedies possibly including back pay, reinstatement, emotional distress damages, and attorneys' fees. This goes into effect January 1, 2026.

Additional legislation has been enacted in Colorado, Maryland, California, and other states. The Colorado AI Act, which will take effect on February 1, 2026, is designed to regulate the use of high-risk AI systems by imposing compliance obligations on developers of the systems and the businesses that use them. The Act encompasses the employment context, making Colorado employers subject to the law. On June 30, 2025, California adopted regulations under the Fair Employment and Housing Act that address the use of AEDTs in employment decisions, effective October 1, 2025. Maryland has also implemented legislation, section 3-717, that forbids the use of facial recognition services to create a facial template during an applicant's interview without a waiver signed by the applicant.

Failed Legislation
Despite the influx of enacted legislation regarding AI in employment, many bills have failed to become law.

  • In Connecticut, a bill that would have implemented AI protections for employees and limited the use of electronic monitoring by an employer failed.
  • In Texas, three separate bills failed to pass. The first related to AI training programs and would have imposed requirements on the developers of these systems. Another would have prohibited state agencies from using an automated employment decision tool to assess a job applicant's fitness for a position unless the applicant was notified and provided with information, and any bias was mitigated.
  • In Georgia, a bill was proposed to prohibit surveillance-based price discrimination and wage discrimination, but it ultimately failed.
  • Lastly, the Nevada legislature proposed a bill to require AI companies to maintain policies to protect against bias, generation of hate speech, bullying, and more. The bill would have imposed requirements on employers, landlords, financial institutions, and insurers to uphold these standards.

Even legislation that reached the final stage of the process has had difficulty being passed. For example, Virginia Governor Glenn Youngkin vetoed an AI bill on March 24, 2025, that would have regulated how employers used automation in the hiring process. Specifically, the bill would have regulated both creators and users of AI technology across multiple use cases, including employment. Youngkin stated that he vetoed the bill out of fear that it would erode Virginia's progress in attracting AI innovators and tech startups.

States remain keenly interested in regulating these emerging AI tools but have yet to align on the best way to do so. Much of the failed legislation lacks the specificity and detail found in the laws that have passed.

Proposed and Pending Legislation
Despite frequent failure of legislation regarding AI in the employment context, there is an array of pending legislation throughout the United States in 2025, with three overlapping themes.

  • The first is aimed at requiring employers to provide notice that an automated decision-making system for the purpose of making employment-related decisions is in use (see California Senate Bill 7, Illinois Senate Bill 2203, Vermont House Bill 262, Pennsylvania House Bill 594, New York Senate Bill 4349 and 185).
  • The second is legislation aimed at limiting the purpose and way an automated decision system may be used to make decisions (see California Senate Bill 7, Colorado House Bill 1009, Massachusetts House Bill 77, New York A.B. 3779 and 1952).
  • The third is legislation aimed at allowing bargaining over matters related to the use of AI between the state and its employees (see Washington Senate Bill 5422). States such as New York aim to expand measures to hold employers accountable for AI-driven employment decisions, while others, including Massachusetts, hope to develop a comprehensive legal framework for the issues AI presents in employment.

What This Means for Employers
The influx of legislation makes it imperative that employers pay careful attention and strengthen their AI compliance practices. In doing so, employers should focus on the following important points.

Be Transparent: Job candidates and employees should be informed when AI tools are used in their selection process or evaluations. Conversely, employers may want to ask candidates to confirm that they did not use AI to produce application materials.

Prepare for Accommodations with AI Use: Have accommodation plans in place should a candidate seek a disability accommodation, particularly recognizing that many laws and federal regulations instruct employers to provide an alternative to the AI tool.

Develop AI Use Policies: In crafting policies, employers should consider how their employees may use AI along with how employers want them to use the technology. Policies should have usage guidelines and best practices.

Check and Audit Vendors: Employers should be mindful to select AI vendors that ensure their systems are not biased, can be audited, and can duly address reasonable accommodations in the recruiting and hiring process. Where possible, employers should (1) require representations that AI tools used in workplace contexts are legally compliant and (2) attempt to negotiate indemnification protections from AI vendors and obtain their cooperation in defending against related claims.

Validate Results: Employers should ensure a diverse applicant pool before applying AI and consider hiring an industrial-organizational psychologist to conduct validation research. Validate the results produced by the tool and compare them to the results human decision-makers have obtained.
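One illustrative (and entirely hypothetical) way to frame that comparison is to check whether the AI tool's selection rate stays within 80% of the rate human decision-makers produced, borrowing the 80% figure from the EEOC's four-fifths rule of thumb as an assumed tolerance. The numbers below are invented, and this sketch is not legal guidance.

```python
# Hypothetical validation check: compare the AI tool's selection rate
# against a human-decision baseline. The 0.8 threshold is an assumed
# tolerance borrowed loosely from the EEOC's four-fifths rule of thumb.

def within_tolerance(tool_rate, human_rate, threshold=0.8):
    """True if the tool's selection rate is at least `threshold`
    times the human baseline (an assumed, illustrative cutoff)."""
    if human_rate == 0:
        return False  # no meaningful baseline to compare against
    return (tool_rate / human_rate) >= threshold

# Invented numbers: the AI tool advanced 24 of 120 candidates;
# human reviewers historically advanced 30 of 120.
tool_rate = 24 / 120    # 0.20
human_rate = 30 / 120   # 0.25
flag_ok = within_tolerance(tool_rate, human_rate)  # 0.20 / 0.25 = 0.8
```

A check that fails would not itself establish discrimination, but it is the kind of disparity that warrants closer review with counsel or a validation expert.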

Stay Informed and Stay Tuned to Legal Shifts: It is important to stay up to date on existing and pending legislation related to AI to ensure AI tools are consistent with federal, state, and local law, and to update policies and practices consistent with legal developments.

Retain Human Oversight: Ensure critical decisions aren't made solely by automated tools. Train HR teams on when to override algorithmic rankings, and audit results for desired (and non-discriminatory) outcomes.

Avoid Litigation Regarding AI Workplace Tools: To avoid legal entanglement, businesses should carefully review any AI tools used for employment functions, potentially turning to both technical and employment law experts for independent audits to ensure the tools are not biased or otherwise potentially violating applicable employment laws. This is recommended even in jurisdictions without new AI-specific discrimination laws, given the rapid adoption of AI workplace tools and the potential for liability under existing, non-AI-specific employment laws.

Finally, consulting with an attorney about employment best practices can help employers navigate compliance with all applicable federal, state, and local laws.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
