Artificial Intelligence ("AI") is making headlines for its capabilities and due to fears of misuse.

AI-assisted technology has been used in the workplace for several years, through functions such as website chatbots. A Deloitte report published on 14 July 2023 found that nearly 4 million people in the UK have already used generative AI for work, and a recent People Management survey found that 66% of employers want to embrace AI, although currently only one in five workers are using it.

However, employers need to recognise the potential for unforeseen consequences. It is difficult to predict how extensively AI will be integrated into our daily lives, and there are ethical and legal considerations around how it may be deployed.

What is artificial intelligence?

The term AI covers a wide range of rapidly developing technologies. AI is commonly understood as technology that allows machines to learn, enabling them to solve set problems and make decisions without being explicitly programmed.

AI processes use algorithms to reach a desired outcome. An algorithm is a sequence of operations on data, acting as a list of logical instructions to accomplish a particular task. AI uses a combination of algorithms that work together and modify themselves to improve their operation. As such, the more an AI tool is used, the more accurate it tends to become.
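The ideas above can be illustrated with a minimal, purely illustrative sketch (not taken from any real AI product): an algorithm is a sequence of operations on data, and a learning system adjusts itself as more data arrives, so its estimates tend to improve with use.

```python
# Illustrative sketch: a running-average predictor that updates its
# estimate with each new observation. Each update is one step in a
# sequence of operations on data; the estimate improves as more
# observations are processed.

def update(estimate: float, observation: float, n: int) -> float:
    """One learning step: nudge the estimate toward the nth observation."""
    return estimate + (observation - estimate) / n

estimate = 0.0
for n, observation in enumerate([10.0, 12.0, 11.0, 13.0], start=1):
    estimate = update(estimate, observation, n)

print(estimate)  # converges toward the mean of the data, 11.5
```

The point of the sketch is only that the system's behaviour emerges from repeated self-adjustment against data, not from a human spelling out the final answer.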

However, AI lacks the critical faculties to interrogate its sources of information and is vulnerable to misinformation and to adopting human bias from the sources it uses.

How can employers use it?

Algorithmic management uses technological tools and techniques to manage workforces remotely. The AI programme uses data collection and surveillance techniques to make decisions that are fully or semi-automated.

Companies like Uber were early adopters of this approach, but it is becoming more widespread, with HR services and employers using it to improve workplace outcomes and open up new opportunities. The three main areas in which employers are anticipated to use AI are recruitment, management and performance review.

Recruitment

AI can be used during the recruitment process to improve the efficiency, speed and quality of candidate selection. CVs and application forms can be automatically reviewed and sorted against pre-determined criteria set by the employer (for example, academic achievement, school attended or geographical region). The employer can also configure the tool with keywords indicating the qualifications or skillset the company seeks.
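The keyword-matching step described above can be sketched in simplified form. This is a hypothetical illustration only: the keywords, candidate names and CV texts are invented, and real screening tools are considerably more sophisticated.

```python
# Hypothetical sketch of keyword-based CV screening: each application
# is scored by how many employer-chosen keywords it contains, then
# candidates are ranked by that score.

KEYWORDS = {"python", "payroll", "hris"}  # illustrative employer criteria

def score(cv_text: str) -> int:
    """Count how many of the employer's keywords appear in the CV."""
    words = set(cv_text.lower().split())
    return len(KEYWORDS & words)

cvs = {
    "candidate_a": "Experienced in payroll administration",
    "candidate_b": "Python developer with payroll systems background",
}

ranked = sorted(cvs, key=lambda name: score(cvs[name]), reverse=True)
print(ranked)  # candidate_b matches more keywords and ranks first
```

Even this toy version shows why the choice of criteria matters: whatever the employer encodes in the keyword list is applied mechanically to every application, including any bias built into those criteria.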

Management

Enhanced efficiency within company operations and increased customer satisfaction can be achieved through AI software that manages employees' shift patterns and automates customer interactions. Amazon has used AI software in its warehouses with wearable devices that direct employees to the next item to collect, set a time to do so and use vibrations to guide them to the item more quickly. The ordering and management of employees is performed instantaneously by the AI-operated software.

AI can also provide consistent customer support with no wait times. However, it remains to be seen how far AI can replicate a human conversation and whether the replacement of human interaction would be perceived positively by employees and customers. Customers can easily identify when they are communicating with an automated chatbot from its scripted responses, which may devalue their view of the service.

Performance management

Various HR tasks can be automatically performed via AI software, such as:

  • Answering employee queries.
  • Using data collection to improve performance assessments.
  • Identifying employees who require additional support.
  • Automatically transcribing employee meetings.

AI software can also make decisions regarding grievances, hiring, and disciplinary procedures based on employee data in line with company policy. These tools can save a company a significant amount of time; however, it is critical that there is final human oversight to ensure the accuracy and quality of decisions. Again, employees may respond poorly to decisions on sensitive matters being made by AI. The AI is also only as good as the information supplied to it.

Potential problems with AI

Lack of transparency in the outcome

The employer cannot see how the AI has reached a particular decision or conclusion. The employer will not be able to view the weight the AI software has placed on certain factors, nor the algorithm by which it operates; nor would the employer necessarily understand it if it could. The same applies to the use of AI by employees.

Employers using AI may act on an AI-generated outcome without understanding the process or the reasons behind it. If an employee challenges a particular decision, the employer will face difficulty in explaining or justifying it, and this lack of explanation would also count against the employer in any Tribunal proceedings.

Employers will also need to ensure that employees are made fully aware of when AI tools are being used in respect of their data. Any deficiency in awareness will be particularly relevant when considering employee surveillance and compliance with data protection legislation.

Accuracy and human oversight

AI generally becomes more accurate over time as it learns and self-modifies; in the meantime, there is potential for it to produce an inaccurate answer or an adverse decision. It is therefore vital that there is human oversight of AI decisions to review, fact-check and approve any action taken or content used by the business. For example, two US lawyers were fined and professionally embarrassed after submitting court filings produced by AI that cited entirely fabricated cases and references.

AI decisions are made without consideration of human elements. Personal factors that a human would give weight to may be disregarded by AI software, although this will also remove any subjective personal opinions or motives behind a decision.

Accountability remains a key issue where decision-making becomes detached from human managers. Human oversight should always remain in place to intervene when necessary and to review AI decisions. Situations may be more complex than the AI software is programmed to understand and may require a human level of judgement.

Intellectual property

Employers should ensure that there is a full understanding of the ownership rights in AI-generated content, and whether the company, the employee, the AI provider or a third-party content generator owns it. AI software also learns from intellectual property and copyrighted materials created by others, and it remains to be seen whether this is lawful and who ultimately holds ownership of the resulting content.

Content produced using AI may be vulnerable to ownership challenges by third parties who recognise their uncredited and unlicensed intellectual property within it.

Discrimination

AI algorithms may embed and conceal discriminatory processes of which the employer is unaware. For example, Uber introduced AI facial-recognition software to ensure that only registered drivers were using its app. The authentication software had difficulty recognising dark-skinned faces, leaving those users unable to access the app and find work. The disparity was more pronounced for darker-skinned women, with a failure rate of 20.8% compared with 6% for men.

Amazon implemented an automated recruitment tool that unwittingly discriminated against female applicants. Although the algorithm was created on a neutral basis, the CV screening system modified itself to prefer male candidates because of the data collected from Amazon's employees over the preceding ten years. This resulted in CVs containing the word 'women's' being downgraded by the algorithm. Amazon abandoned the recruitment tool and received negative press as a result.

AI software, such as ChatGPT, is only as unbiased as the data it receives and does not have the critical analysis skills to recognise conscious or unconscious bias. Employers using AI will therefore need to ensure that they understand its data sources and that the AI software is monitored to identify potentially discriminatory outcomes. Employers may have to accept that there is an inherent risk of discrimination where employees use AI.

Data protection

Where AI is used, data will be transferred outside the company's organisation. Employers will therefore need to be alert to where this data is going and how it will be stored and subsequently used. AI systems may use employee or customer data elsewhere, where it can be accessed by third parties. Publicly available AI tools such as ChatGPT can take the information inputted by one user and supply it to another.

ChatGPT stores every conversation logged, including personal data. This carries a risk of breach of confidentiality and processing data outside of the given lawful purpose.

Data processors must ensure that staff are trained to recognise the data protection risks of using AI software and must take security measures to protect personal data.

Summary

While AI can streamline company operations and boost productivity, employers must be aware of its potential implications. As noted above, the risks are often unforeseen at the time the AI software is deployed. Employers should consider the use of AI carefully and may wish to wait for further developments in the area before adopting it.

ChatGPT reached 100 million monthly active users just two months after launch, making it the fastest-growing consumer application in history. Employers should therefore be particularly mindful of its widespread use in the workplace; employees may already be using it without the employer's knowledge.

To best protect themselves, employers should implement a clear policy on the use of AI in the workplace so that employees fully understand the considerations and potential implications of generative AI such as ChatGPT. Ultimately, responsibility for any data protection breach, decision or discrimination arising from the use of AI software will rest with the employer; as such, employers may wish to prohibit the use of AI entirely within their policy.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.