Following on from the Government's 2023 White Paper championing a 'pro-innovation approach' to the regulation of artificial intelligence (AI), the House of Commons released a more detailed analysis of the interplay between AI and employment law. Although it does not propose any new regulatory or legal changes, it provides a helpful overview of some of the current uses of AI in the workforce, the legal challenges facing those uses, and the different views taken by various political and social interest groups.

Currently, there are three main uses for AI at work - recruitment, task distribution and performance management, and workforce surveillance/monitoring. These technologies are more prevalent in large companies, particularly those in professional or technical industries.

In light of the Government's decision not to introduce any new legislation, the paper identifies a few challenges that the use of AI in this regulatory landscape poses. These are:

Mutuality of Obligation & Trust and Confidence

This is a common law requirement for an enforceable contract of employment, whereby the employee's promise to work personally for the employer is dependent upon the employer's promise to pay the employee. This requirement for 'personal service' on the part of the employee is qualified by an implied term of mutual trust and confidence between the parties, which the Government report argues is at risk when implementing AI in the workplace.

It is believed that it will be harder for employers to satisfy this requirement if they base important decisions, such as those relating to disciplinaries, pay rises and promotions, either wholly or significantly on an AI system's conclusion. This is largely because it may be more difficult for employers to explain how their decisions were reached and to justify that they were made in good faith. That ability to explain is unlikely to be optional: in many cases employers are obliged by law to be able to show their workings or to involve a human decision-maker.

Although the report does not discuss this point, one may also argue that the requirement for personal service will be more difficult to satisfy on the part of the employee, particularly given the increasing use of AI technology such as ChatGPT in the workplace.

Use of AI in Dismissals

It has been suggested that the use of and reliance on AI in dismissing employees may risk a number of dismissals being found to be unfair. This is not least because there could be flaws within the AI system itself, but also because of the issue of 'explainability'. The paper highlights that it is difficult to understand how AI systems make their decisions and what information is used to inform the end conclusion. In particular, it is noted that intellectual property rights in such AI systems create an added difficulty, as developers are reluctant to disclose how their systems work to competitors or other developers in the industry. In other words, the main issue appears to be a lack of transparency.

Discrimination
This is hardly a new concern and has received a lot of media attention in the past few years, particularly following reports that a hiring algorithm trialled by Amazon unfairly discriminated against female job applicants. Since then, concerns have been raised about Uber's facial recognition technology performing worse on people of colour. Depending on how the AI system was trained, the technology itself could be biased against certain protected characteristics, contrary to the Equality Act 2010. This bias can manifest itself as both direct and indirect discrimination (the latter arising where unfavourable treatment stems from a "provision, criterion or practice" that disproportionately disadvantages a group sharing a protected characteristic).

While there is undoubtedly reason for concern, the report noted that it is difficult to conclude that AI systems will be any more or less biased than their human counterparts. Further, it should not be ignored that the Equality and Human Rights Commission, amongst others, considers that the current employment legislative framework provides adequate protection against discrimination, which should be somewhat reassuring.

Surveillance, Privacy and Data Protection

It is becoming more common for employers to rely on AI technologies to conduct surveillance and track worker performance using video surveillance, desktop monitoring or other methods. However, the desire of employers to deploy this kind of technology has to be balanced against employees' rights as data subjects under the GDPR and their right to privacy under Article 8 of the European Convention on Human Rights. There is an understanding in the UK that deployment of AI monitoring technology must be done: 1) with a 'lawful basis' under the GDPR, 2) in pursuit of a specified and lawful reason, and 3) in a way that is proportionate to that objective. There has been limited guidance to date from the government on the extent to which employers are legally allowed to embark on such exercises. In part, that is because the tools available to employers (and to the government itself) are advancing faster than regulators can follow: recent years have seen rapid development in both voice and facial recognition technology, as well as in tools offering automated analysis of data and imagery.

For now, employers would be best advised to treat any deployment of a new automated monitoring technology as an event that triggers their duty to carry out a data protection impact assessment under Article 35 of the GDPR, recording their thinking on their lawful basis, underlying objectives and operational safeguards against unfair intrusion into employee privacy.

If you would like more information on the dos and don'ts of workforce surveillance, we are proposing to write an article on the subject shortly. Watch this space!

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.