16 October 2025

Dystopian Times: Just Because AI Can Do Something, Should It?

Gatehouse Chambers

AI can be a vital tool for businesses; but use it thoughtlessly and be prepared for legal consequences.

First published in International Employment Lawyer (IEL)

A TikTok video from recent graduate Tim Lee reignited the debate over the question: just because AI can do something, should it?

Lee shared how his employer had installed productivity-tracking software that took screenshots of his screen every ten minutes while he was working from home.

The software also tracked the percentage of time he spent on each online activity, as well as the websites he visited. Lee described being subjected to that level of monitoring as "pretty dystopian".

Employers have expressed concern about the productivity of employees who work from home, which has led a number of big corporates to insist on a return to the office. At the start of this year, for example, JP Morgan insisted that all its employees return to the office five days a week.

But the return-to-office mandate has faced stiff opposition from employees, with parents showing particularly strong resistance to the requirement for a full-time office presence.

Research from King's Business School in May found that 53% of fathers with school-age children would quit or look for a new job if faced with a demand to return to the office full time. Resistance was even stronger among mothers, only 33% of whom said they would comply with the same mandate.

For employers who feel they cannot demand that workers return to the office, productivity-tracking software can appear to be an easy solution. By measuring activities such as keystrokes, employers can reassure themselves that an employee is working rather than focusing on domestic tasks. The reality, however, can be a level of micromanagement not possible in a traditional office environment.

The OECD's paper on "Algorithmic Management in the Workplace", which surveyed 6,000 firms across France, Germany, Italy, Spain, Japan, and the US, found that the proliferation of such AI tools ranges from 90% of firms in the US (using at least one tool to instruct, monitor or evaluate workers) to 40% in Japan.

The polarisation of views on AI is also evident on the question of whether algorithmic management tools reduce bias. Around 60% of US managers believe AI tools reduce bias, whereas the picture in Europe and Japan is more mixed. A sizeable proportion of managers (between 27% and 36%, depending on the country) are unsure who holds responsibility for an incorrect decision made using AI tools.

Further, managers are concerned that they do not understand the decisions or recommendations made by algorithms, or how these were reached.

From a worker's perspective, there are concerns that monitoring tools adversely affect job satisfaction and increase stress levels. A significant number of workers therefore have serious doubts about the trustworthiness of the AI tools used to monitor them. This has a number of legal consequences.

Current UK government guidance on AI and employment law focuses on the need for companies to comply with the Equality Act 2010 as well as privacy and data protection laws. It should be recognised that the type and amount of data collected by AI tools can often fall within the definition of biometric data.

The Information Commissioner's Office's guidance on what constitutes biometric data specifically includes the way someone types, which is often captured by keystroke monitoring. For this type of data, explicit and informed consent is recommended as the most appropriate condition for lawful processing.
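
To see why the way someone types can identify them, consider this minimal sketch (the timestamps, values and function names are hypothetical, for illustration only). The intervals between key events form a typing rhythm that characterises how a person types rather than what they type, which is what brings such monitoring data within the scope of biometric data:

```python
import statistics

# Hypothetical timestamped key events (in seconds), as a keystroke-monitoring
# tool might log them while someone types. All names and values are illustrative.
key_events = [0.00, 0.18, 0.31, 0.52, 0.66, 0.91, 1.04]

def rhythm_profile(timestamps: list[float]) -> tuple[float, float]:
    """Summarise a typing rhythm as the mean and spread of inter-key gaps.

    Even this crude two-number profile begins to describe *how* a person
    types rather than *what* they type - the distinctive pattern that can
    make keystroke data biometric.
    """
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return statistics.mean(gaps), statistics.stdev(gaps)

mean_gap, gap_spread = rhythm_profile(key_events)
print(f"mean inter-key gap: {mean_gap:.3f}s, spread: {gap_spread:.3f}s")
```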

There is a further risk that uncritical use of AI tools may amount to automated decision-making, particularly in organisations with no systems in place to ensure human review of AI decisions. Increased use of AI monitoring tools may therefore bring additional privacy law obligations into play.

Biases embedded in AI systems could also breach the Equality Act. Where an employee is disabled, the legislation imposes a proactive duty to make reasonable adjustments.

But most current AI tools do not allow for the exercise of discretion and are typically marketed as applying the same treatment to all individuals. Similarly, where a crude metric such as the number of keystrokes is equated with productivity, this could have a discriminatory impact.

Studies suggest that older people, broadly speaking, type more slowly and deliberately, while younger people are faster but more prone to typographical errors. There are a number of similar permutations in the use of AI monitoring tools that are likely to lead to bias and discriminatory decision-making on the grounds of protected characteristics.
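
To make the discriminatory risk concrete, here is a minimal sketch (the workers, figures and scoring function are hypothetical, not drawn from any real product) of how a naive keystrokes-per-hour score ranks a slower but equally productive typist at half the "productivity" of a faster colleague:

```python
from dataclasses import dataclass

@dataclass
class WorkerLog:
    name: str
    keystrokes: int       # raw keystrokes recorded by the monitoring tool
    hours_worked: float   # hours the tool observed the worker as "active"
    tasks_completed: int  # actual output, which the crude metric ignores

def keystroke_score(log: WorkerLog) -> float:
    """A naive 'productivity' score: keystrokes per hour.

    This is the kind of crude metric the article warns about: it measures
    typing speed, not output, so slower but accurate typists score lower.
    """
    return log.keystrokes / log.hours_worked

# Two hypothetical workers producing identical output (8 tasks each).
fast_typist = WorkerLog("A", keystrokes=28_000, hours_worked=8, tasks_completed=8)
slow_typist = WorkerLog("B", keystrokes=14_000, hours_worked=8, tasks_completed=8)

for w in (fast_typist, slow_typist):
    print(w.name, round(keystroke_score(w)), "keystrokes/hour,", w.tasks_completed, "tasks")
# A scores 3500 and B scores 1750 - half the 'productivity' for the same work,
# a pattern that could correlate with age and so have a discriminatory impact.
```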

In Europe, the AI Act has been drafted to create safeguards against pervasive employee surveillance. What is not yet clear is how the EU will ensure that the proposed system of self-assessments and oversight mechanisms for high-risk AI (such as monitoring tools) works in practice. Where the users of AI have only a limited understanding of the tools and how they operate, their answers to any regulatory investigation may lack depth.

The scale of the challenge for regulators cannot be overstated. Well-resourced regulators are likely to resort to using their own AI tools to analyse properly how an employer's AI is working, including the data it is processing. Properly prepared employers should be able to explain how their AI tools work, how they are used within the business and how they are audited throughout their "life cycle" of use, before they face regulatory action.

Employers will need to be aware of a number of additional challenges posed by monitoring software, not least their common law duty to take reasonable care for the health and safety of their employees.

The early signs are that interacting with AI, such as chatbots, can negatively affect mental health, including behavioural changes such as increased anxiety. It is arguably foreseeable that intrusive and frequent monitoring of workers could cause similar harm, putting employers in breach of their duty of care to employees.

Companies may be minded to attribute fault to the designers and sellers of AI tools. However, in legal systems where liability is attributed to humans rather than to AI, the courts will typically focus on the way AI tools are deployed and used by senior executives and managers.

AI can be a vital tool for businesses, but it should not be used thoughtlessly, for example by prioritising savings in resources and time over checking for errors made by AI tools.

Businesses cannot delegate responsibility for decision-making to AI. The best use of AI will be where employers and employees understand why monitoring is in place and how it operates. Proper analysis of how AI tools are to be used should be an essential first step, not an afterthought; otherwise, legal liability could be the consequence.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
