ARTICLE
17 October 2024

Navigating The Intersection Between AI And Employment Law

Matheson

Contributor

Established in Dublin, Ireland in 1825, Matheson also has offices in Cork, London, New York, Palo Alto and San Francisco. More than 700 people work across Matheson's six offices, including 96 partners and tax principals and over 470 legal and tax professionals. Matheson services the legal needs of internationally focused companies and financial institutions doing business in and from Ireland. Our clients include over half of the world's 50 largest banks, 6 of the world's 10 largest asset managers and 7 of the top 10 global technology brands, and we have advised the majority of the Fortune 100.

AI is transforming the world of work. Forecast to be the next Industrial Revolution, the rise of AI is redefining roles, unlocking gains in productivity and creativity, and generating substantial efficiencies in workplace management. In reshaping the world of work, it also raises novel issues in the employment law context.

In this AI series, we will explore the legal issues connected with these changes and offer practical tips for businesses in the wake of emerging AI regulation. In this introductory article, we provide an overview of the key considerations relating to AI and employment law.

What is AI?

Before delving into some uses of AI, what do we mean when we refer to "AI"? Simply put, AI is the use of computer technology to simulate human intelligence and decision-making. AI has shot to prominence in recent years due to the proliferation of Generative AI tools, such as OpenAI's ChatGPT, which can create new text, images and other high-quality content based on the data on which they were trained.

How is AI used in the workplace?

Employers are increasingly turning to AI to help improve efficiencies across all stages of the employment lifecycle. AI usage is often not visible or well known in the workplace, but it can include the following purposes:

  • Recruitment: screening applications to identify suitably qualified candidates, sifting for keywords and drafting job descriptions.
  • Training: generating customised learning modules and training programmes for employees based on learning style, performance data and development goals.
  • Monitoring: evaluating performance data, setting productivity targets and allocating tasks.
  • Dismissal: assessing and scoring employees as part of a redundancy selection process.

What are the risks when implementing AI in the workplace?

Despite the many positive uses of AI technology in the workplace, implementing these tools without appropriate human oversight and intervention can give rise to the following risks:

1. Discrimination and Bias

Unintended bias can arise where AI systems are used for hiring, promotions, performance evaluations or terminations. AI systems are typically trained on existing datasets, and if that training data is biased, the AI will perpetuate the bias. This may result in an employer unintentionally discriminating against protected groups (for example, on grounds of race, gender, age or disability), which could in turn breach employment equality legislation in Ireland.

Employers should be particularly mindful of the obligation to provide reasonable accommodation to employees and applicants with disabilities, and should consider whether the use of AI in a process, such as an AI-driven recruitment assessment, may disadvantage people with certain disabilities and what adjustments may be required to alleviate that disadvantage.

2. Unfair Dismissal

Employees who have one year's continuous service have the right not to be unfairly dismissed. An employer must be able to demonstrate that the dismissal was for a fair reason, that a fair process was followed and that dismissal was a reasonable response in the circumstances. Where AI is used to inform a dismissal decision, such as scoring in a selection or performance process, this carries significant risk and an employer will need to demonstrate appropriate human oversight and fair decision-making in the process.

3. Emerging Legal and Regulatory Requirements

As new regulations governing the use of AI (such as the EU AI Act) are introduced, employers must ensure compliance with these laws. Non-compliance could result in penalties or legal challenges. For example, under the new EU AI Act, non-compliance with the prohibition on certain AI practices can result in fines of up to EUR 35 million or 7% of a company's total worldwide annual turnover, whichever is higher. Employers may need to establish governance frameworks to ensure that AI is used ethically and legally, including regular audits, assessments and reporting to demonstrate compliance. Employers need to be proactive in understanding these potential legal issues, implementing compliance measures, and ensuring transparency and fairness when using AI in the workplace.
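
By way of illustration only, the short sketch below shows how the "whichever is higher" cap can operate in practice. The turnover figure is a made-up assumption for the example and is not drawn from any real case.

```python
# Illustrative only: estimating the upper fine cap for prohibited AI practices
# under the EU AI Act. The turnover figure below is a hypothetical assumption.
FIXED_CAP_EUR = 35_000_000        # EUR 35 million
TURNOVER_SHARE = 0.07             # 7% of total worldwide annual turnover

annual_turnover_eur = 1_000_000_000   # hypothetical company turnover

# The cap is the higher of the fixed amount and the turnover-based amount.
max_fine = max(FIXED_CAP_EUR, TURNOVER_SHARE * annual_turnover_eur)
print(f"Maximum fine: EUR {max_fine:,.0f}")  # EUR 70,000,000 in this example
```

For a smaller company whose turnover-based figure falls below EUR 35 million, the fixed cap would instead be the relevant ceiling.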

4. Data Protection

Employers using AI to monitor employee performance or behaviour must ensure compliance with data protection law, in particular the General Data Protection Regulation ("GDPR") and the Data Protection Act 2018. Collecting personal data without a lawful basis, or using data for purposes beyond those disclosed, could breach this legislation. Other uses of AI, such as continuous surveillance (for example, tracking employees' keystrokes, emails or movements), could also lead to data protection violations if they go beyond what is necessary and proportionate.

5. Vicarious Liability

To mitigate the risks and reap the rewards of AI and generative tools, employers first need to be aware that they may be vicariously liable for the actions of their employees if these AI tools are used in an unlawful or inappropriate manner. Providing guidance and training on the use of AI in the workplace, and having appropriate policies in place, will be particularly important in mitigating these risks.

6. Health and Safety

AI systems that control workplace environments or machinery must comply with occupational health and safety legislation. If an AI-driven system malfunctions and causes an accident or injury, employers could face claims for failing to provide a safe workplace. In addition, constant monitoring through AI tools could lead to mental health issues, such as stress or anxiety, potentially resulting in claims against an employer.

7. Training and Upskilling Employees

One major issue is the need to reskill and upskill employees to work effectively alongside AI tools, which can require significant investment in training and education. Employers are also likely to face employee concerns over potential job displacement, which can lead to resistance and a need for careful management of workforce transitions.

How can employers reduce risks relating to AI in the workplace?

There are a number of proactive steps employers can take to minimise potential risks relating to AI in the workplace:

1. Audit and Test AI systems

Identify, audit and understand the current and proposed future uses of AI systems within the business. Conduct regular testing of AI systems, with human oversight, to identify any risk of algorithmic bias emerging and take proactive steps to address any issues; a simplified illustration of one such check is set out below.
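
As a minimal sketch of what such a test might involve, the Python example below compares the selection rates an AI screening tool produces for different groups and flags large disparities for human review. The data, group labels and 0.8 threshold (a rule of thumb drawn from the US "four-fifths" guideline rather than from Irish or EU law) are assumptions made for illustration; any real audit would need to be designed around the employer's own systems, legal advice and statistical expertise.

```python
# Minimal sketch of a bias check on AI-driven screening outcomes.
# The records below are hypothetical; in practice the outcomes would come
# from the employer's own AI system logs.
from collections import defaultdict

# Each record: (protected characteristic group, whether the AI shortlisted the person)
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

counts = defaultdict(lambda: {"selected": 0, "total": 0})
for group, selected in outcomes:
    counts[group]["total"] += 1
    counts[group]["selected"] += int(selected)

# Selection rate per group
rates = {g: c["selected"] / c["total"] for g, c in counts.items()}

# Compare each group's rate with the most favoured group's rate.
# A ratio well below 1 (e.g. under 0.8) is a flag for human review,
# not proof of discrimination.
best = max(rates.values())
for group, rate in sorted(rates.items()):
    ratio = rate / best if best else 0.0
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, ratio vs most favoured {ratio:.2f} [{flag}]")
```

Any group flagged by a check like this should prompt human investigation of the underlying decisions and training data, rather than an automated conclusion either way.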

2. Conduct Equality Impact Assessments

In the design and implementation phases of AI systems, assess any potential bias risks and/or adverse impacts that the proposed use may have on protected groups, any objective justification for differential treatment, and the remedial steps that may be taken.

3. Develop Ethical AI Guidelines, Training and Governance

Create policies that govern the ethical use of AI, including privacy, bias mitigation, transparency and accountability. A key step would be to assign a team to oversee AI implementation and monitor for unintended consequences, such as biased decision-making or data protection concerns, and to ensure that appropriate training is provided to all employees using AI systems in the workplace.

4. Ensure Data Privacy and Security

Implement strong technical and organisational security measures to safeguard against data breaches and ensure compliance with data protection law.

5. Prepare for Regulatory Compliance

Ensure that AI implementations comply with legal standards to avoid legal risks and penalties, and keep up with evolving laws and regulations relating to AI, such as the new EU AI Act (which we will discuss in our next article in the AI in the Workplace series).

By taking these steps to embrace innovation, employers can better prepare for the opportunities and challenges that AI brings to the workplace, ensuring that both the organisation and its employees thrive in an increasingly automated world.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
