In January 2025 the Government announced a plan to 'unleash' artificial intelligence (AI) across the UK to deliver systemic change. AI already plays a prominent role in workplaces across a range of sectors, with the intention of enhancing efficiency, boosting productivity and improving decision making.
One area where AI is particularly prevalent is recruitment. It is now so common for job applicants to use AI to write cover letters and CVs that some companies have started using software to detect AI use, and immediately disregard any application where it is identified. Likewise, it has become common for businesses to use AI to filter applications when shortlisting candidates for interview.
However, as we know, AI is not without its pitfalls. Reports are emerging of patterns of potential bias in the way AI filters information, as well as risks to data privacy and concerns over transparency. As a result, the Information Commissioner's Office (ICO) has issued a report setting out a number of recommendations for businesses that wish to use AI tools during recruitment.
What does the ICO say?
The ICO recognises the benefits of using AI in recruitment. They are not telling businesses not to use it; their advice is to be careful when you do. The ICO's audit identified a number of areas of data protection and privacy compliance where improvements were needed to mitigate risk. In particular, the ICO is concerned that some AI tools:
- are not processing personal data fairly, for example by filtering out candidates with certain protected characteristics.
- are inferring protected characteristics, such as gender and ethnicity, from an applicant's name instead of asking for the information.
- are collecting more personal data than necessary, retaining it indefinitely, and then combining it with other information from the internet or social media to build large databases of potential candidates without their knowledge.
To address these concerns, the ICO has issued seven recommendations targeted at those working in recruitment and at AI developers, with the focus on fairness, transparency and accountability.
- Fairness: AI providers and businesses must ensure they process personal data fairly when using AI. This includes monitoring for actual or potential fairness, accuracy or bias issues in AI and its outputs, and ensuring appropriate action is taken to address any issues. Where special category personal data is being processed, they must have effective monitoring systems in place to detect bias and discriminatory outputs. Any data that is inferred or estimated by AI will not be adequate or accurate enough, and therefore will not comply with the law.
- Transparency: Businesses must provide candidates with clear privacy information explaining what personal data will be processed by AI tools, how it will be processed, and the logic involved in making predictions or using AI outputs.
- Data minimisation and purpose limitation: AI generally requires large amounts of data, including personal data, so that it can learn to reliably replicate tasks or make decisions. Where AI is used in the recruitment process, only the minimum candidate personal data needed to develop and train AI systems should be used. It is equally important that this data is not then reused for other incompatible purposes (i.e. it is only used for the purpose for which it was originally obtained), and that it is not retained for longer than necessary.
- Data Protection Impact Assessments (DPIAs): DPIAs must be completed early, before any personal data is processed through AI. These impact assessments are designed to identify and mitigate data protection risks before any processing begins. They should be updated regularly, particularly whenever the recruitment process changes or the AI tool's functionality changes.
- Roles of controllers and processors: AI providers and businesses must clearly identify whether the AI provider acts as a controller, joint controller or processor for each instance of data processing, and record this in contracts and privacy notices.
- Lawful basis for processing: Businesses must identify and document the lawful basis for each instance of processing personal data, and must do so before they start to process data through AI tools. This includes identifying any additional conditions where special category data is being processed.
- Explicit processing instructions: Businesses must provide written processing instructions for AI providers to follow when processing personal data on their behalf, and should periodically check that the providers are following those instructions.
Comment
The fact that the ICO has issued these recommendations shows how widespread the use of AI in the recruitment process has become, and equally that the ICO has data protection and privacy concerns about how these tools are currently being used to process personal data and special category data.
The message to businesses is to think about data protection and privacy obligations whenever you are using AI tools to process personal data. The ICO's recommendations are a useful framework for ensuring you comply with those obligations, with fairness, transparency and accountability being the key principles for mitigating risk.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.