30 January 2026

AI Use In The Workplace: What Employers Should Do Now To Manage Risk

Ford & Harrison LLP

Artificial intelligence tools, particularly generative AI, are increasingly being used in the workplace, often through informal adoption driven by individual employees rather than enterprise-level deployment decisions. Although comprehensive regulation of artificial intelligence remains unsettled, AI-assisted work is already a reality for many employers, frequently without formal guidance, oversight, or documentation. As a result, employers may lack insight into how these tools are being used, what data is being shared, and who is accountable for AI-assisted outputs—creating exposure before an issue arises.

The legal framework governing workplace AI use is evolving rapidly and, in some respects, becoming less settled rather than clearer. In 2025, the Trump Administration issued an executive order reversing prior federal AI guidance, and the EEOC subsequently removed technical assistance materials addressing AI bias and discrimination. At the same time, multiple states have enacted or finalized AI-specific employment laws taking effect this year, including Colorado, Illinois, Texas, and California. Together, the withdrawal of federal guidance and the emergence of divergent state-level requirements have increased uncertainty for employers, particularly regarding how responsibility for AI-assisted employment decisions will be assessed when third-party tools are involved.

How Artificial Intelligence Is Being Used in the Workplace

Across industries, employees are using AI-enabled tools to assist with both routine and substantive work activities. Common uses include drafting written communications, policies, and performance documentation; screening or summarizing job applications and resumes; preparing evaluation materials; generating training content or internal guidance; and assisting with disciplinary or termination documentation. Many of these tools are publicly available or embedded within existing software platforms.

In many organizations, AI tools are being used without coordination with legal, human resources, or information technology functions. While such use may increase efficiency, it can also introduce risk where employers lack clear parameters governing acceptable use, appropriate oversight, and accountability.

Key Employment-Related Risk Areas

Unmanaged AI use may implicate several areas of employment law and workplace risk, particularly where AI-assisted outputs influence employment decisions.

Data Privacy and Confidentiality

Employees may input sensitive information into AI tools, including employee and applicant data, compensation details, medical information, or confidential business information. Depending on the tool and its configuration, this information may be retained, processed, or used in ways that are not fully transparent to the employer, increasing privacy, confidentiality, and compliance risk.

Hiring, Promotion, and Disciplinary Decisions

AI-assisted screening, evaluation, or drafting related to employment decisions may raise concerns regarding bias, disparate impact, and documentation accuracy. These risks are heightened where AI tools are used to influence or support hiring, promotion, discipline, or termination decisions without clear standards, transparency regarding inputs, independent human review, and accountability for final decision-making.

Accuracy and Reliability of AI-Generated Content

AI-generated content is inherently susceptible to inaccuracies, including so-called hallucinations: outputs that appear authoritative or plausible but are inaccurate, incomplete, or fabricated. When employers rely on AI-generated content without meaningful human verification, they may be unable to credibly defend the accuracy or basis of that content if it is later challenged.

The use of unverified AI outputs in personnel files, employment decisions, or external communications can significantly increase litigation and regulatory exposure. This risk is heightened where an employer cannot explain why erroneous information was relied upon or demonstrate that meaningful human review occurred before the content was used.

Litigation and Discovery Considerations

The use of AI tools can significantly complicate discovery obligations and create litigation exposure once a dispute arises. Employers may be required to identify which AI tools were used in connection with contested employment decisions, produce prompts or inputs entered into AI systems, explain how AI-generated outputs were reviewed and incorporated into final decisions, demonstrate what human oversight occurred and by whom, and preserve AI-generated content relevant to the dispute.

An employer's credibility may be undermined if it cannot clearly articulate how AI tools were used, who exercised judgment, and whether decisions were driven by human reasoning rather than automated output. In the absence of clear policies, controls, and documentation, responding to these demands can be time-consuming, costly, and damaging to an employer's litigation posture.

Observations From Current Practice: A Governance Gap

Across organizations, a consistent governance gap is emerging around workplace AI use. Employees are often using generative AI tools without approval, guidance, or training, and there is frequently no organizational clarity regarding what information may or may not be entered into AI systems. AI usage practices commonly vary across departments within the same organization, with little coordination or centralized oversight.

Employees often receive limited training regarding AI limitations, associated risks, or the need for human verification of AI-generated outputs. There is also frequently no clear accountability for AI-assisted work product, and limited documentation regarding which AI tools are in use or how they are being deployed. These gaps are not theoretical; they often become visible only when an employer must defend an employment decision and explain what role, if any, AI played in that decision.

Elements of an Effective AI Usage Policy

A well-designed AI usage policy serves multiple risk-management functions. It establishes clear boundaries for acceptable use, creates accountability for AI-assisted work product, and helps generate documentation that may be critical in defending employment decisions. There is no single model that fits every organization, but effective policies consistently address a core set of issues.

These issues typically include defining the scope of covered AI tools and AI-enabled features; specifying permissible and prohibited uses; restricting the types of data that may be entered into AI systems, including employee personal data and confidential business information; and requiring meaningful human review before AI-assisted outputs are relied upon. Effective policies also address guardrails for the use of AI in hiring, promotion, discipline, and termination decisions; oversight of AI functionality embedded in third-party platforms; vendor diligence expectations; and accountability for AI-assisted work product. Policies are most effective when they are practical, clearly written, and aligned with how employees perform their work.

Steps Employers Should Consider Taking Now

Employers need not resolve every AI-related issue immediately, but several near-term steps can meaningfully reduce risk. Employers should begin by identifying how AI tools are currently being used within the organization, including AI functionality embedded in third-party platforms.

Employers should also consider adopting baseline policy guidance that defines permissible uses, prohibits specified categories of inputs, and requires meaningful human oversight before AI-assisted outputs are relied upon. Targeted training for human resources personnel, managers, and other users is often essential to ensure consistent implementation and reinforce verification expectations.

Finally, employers may wish to review hiring, promotion, performance management, and disciplinary processes to determine whether AI tools are being used and whether appropriate controls are in place. Coordination among legal, human resources, information technology, compliance, and procurement functions can help monitor vendor terms, maintain documentation, and update internal guidance as legal requirements and business practices evolve.

Looking Ahead

Artificial intelligence tools will continue to evolve, as will the legal and regulatory landscape governing workplace use. Employers that take a measured and practical approach now, with a focus on governance, accountability, and training, will be better positioned to adapt as expectations continue to develop.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

