UK Government Publishes Guidance On Responsible Artificial Intelligence In Human Resources And Recruitment

DSIT has outlined key considerations for organisations seeking to procure and deploy AI in the recruitment lifecycle

The Department for Science, Innovation & Technology (DSIT) has released guidance to help organisations that use artificial intelligence (AI) in recruitment ensure they adhere to the UK government's "high-level principles" for the regulation of AI.

As set out in its AI white paper of March 2023 and consultation response of February 2024, the government will not, for now, introduce new AI-specific regulation. Instead, existing UK regulators will apply their existing powers within their respective remits, guided by the high-level principles.

While there is no overarching regulator for general workforce or employment matters, DSIT's guidance clarifies how the high-level principles apply to AI used in recruitment. Specifically, it outlines how organisations should adopt AI assurance mechanisms to support the responsible procurement and deployment of AI systems in HR and recruitment.

Considerations before procurement

The guidance first details key considerations for procuring AI systems, including their purpose, desired outputs and functionality, how they will integrate with existing organisational processes, and their impact on employees.

The guidance specifies that organisations must consider whether any AI tool procured for the recruitment process complies with their obligations under the Equality Act 2010. It states that employers should ask themselves whether introducing the technology into the recruitment process:

  • creates new barriers to applicants with protected characteristics;
  • amplifies existing risks; or
  • creates novel risks.

Employers should consider whether their use of AI in recruitment is automated processing which falls within the scope of Article 22 of the UK GDPR. If it is, employers must complete a data protection impact assessment (DPIA).

Assurance mechanisms before procurement

The DSIT guidance sets out various assurance mechanisms for employers to address these considerations. It recommends creating an AI governance framework that outlines how AI will be embedded in and complement existing business functions. The framework should also detail methods for escalation, assign accountability for AI tools, and explain how the organisation will address user feedback.

It encourages the implementation of algorithmic impact assessments and equality impact assessments as processes to anticipate the wider effects of AI tools on environmental, equality, human rights, data protection, or other outcomes. Additionally, comprehensive data protection impact assessments are required for all development and deployment of AI systems involving personal data to identify and minimise associated risks.

Considerations during procurement

The guidance emphasises that organisations ought to understand the functionality of AI tools, including interactions with internal processes.

It recommends seeking evidence to support claims made by suppliers during procurement, including requesting documentation such as impact assessments and/or DPIAs at this stage. Sharing these assessments promotes transparency about the potential risks posed by AI systems and helps organisations remain aware of them.

Assurance mechanisms during procurement

The guidance suggests several assurance mechanisms for organisations to consider:

  • A bias audit can be conducted regularly to assess algorithmic systems for any bias in input data or outcomes.
  • Performance testing can be used to evaluate the precision and accuracy of the AI model, with specific metrics depending on the model type.
  • Risk assessments should be conducted alongside impact assessments to identify and mitigate potential risks.
  • Developers should provide model cards, which contain important information about the AI model, to purchasing organisations.

Implementing these assurance mechanisms will help organisations make informed decisions, understand model characteristics and limitations, and demonstrate reasonable implementation of AI tools.

Considerations before deployment

DSIT recommends conducting a pilot of AI tools within the organisation. This pilot should involve a diverse range of users, including employers and affected communities such as job seekers.

During this phase, employers should be vigilant, considering factors such as whether employees are using the system correctly and whether there is scope for misuse. It is important to assess the AI model's performance against equalities outcomes, as performance may vary depending on the deployment environment.

The guidance highlights the sources of bias in AI systems, including learned bias and inaccuracy.

Employers should also plan for reasonable adjustments, ensuring that the AI system does not disadvantage applicants with disabilities during the interview process. If the system cannot accommodate reasonable adjustments, it should be removed from the process.

Assurance mechanisms before deployment

Performance testing is an assurance mechanism recommended in the guidance, typically conducted during the pilot phase to assess the AI model's performance in the organisation's real-world environment. If the organisation lacks in-house technical expertise, the supplier should be responsible for conducting performance testing.
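To make the pilot-phase performance testing concrete, a simple check for a binary screening model might compute precision and accuracy from pilot outcomes. The labels below are invented for the sketch; the guidance itself does not prescribe specific metrics or data.

```python
# Illustrative performance check for a binary screening model:
# precision and accuracy computed from pilot-phase outcomes.
# The sample labels are assumptions for this sketch.

def precision_accuracy(predicted, actual):
    tp = sum(p and a for p, a in zip(predicted, actual))
    fp = sum(p and not a for p, a in zip(predicted, actual))
    correct = sum(p == a for p, a in zip(predicted, actual))
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    accuracy = correct / len(actual)
    return precision, accuracy

predicted = [True, True, False, True, False]  # model's shortlist calls
actual    = [True, False, False, True, True]  # recruiter ground truth
prec, acc = precision_accuracy(predicted, actual)
print(round(prec, 2), round(acc, 2))  # → 0.67 0.6
```

As the guidance notes, the appropriate metrics depend on the model type; for a ranking model, for example, rank-based measures would replace precision and accuracy.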

Training and upskilling employees are crucial for effective use of the new system, and the guidance suggests compiling training resources for future reference. Regular impact assessments, including pre-deployment assessments, will help identify potential risks and assess the system's actual impacts.

Transparency is essential in using AI systems during recruitment, and applicants should be informed about their use, including limitations and the possibility of contesting decisions. This enables applicants to request reasonable adjustments and make informed decisions.

Live operation considerations

To ensure the effective performance of an AI system, the guidance advises organisations to monitor and evaluate its functioning regularly. This ongoing monitoring helps identify any issues that may affect the system's performance. Additionally, organisations should provide avenues for individuals to provide feedback and seek redress, as regular monitoring may not capture all potential harms.

By implementing these practices, organisations can proactively address any issues, promote transparency and ensure the continuous improvement of their AI systems.

Assurance mechanisms for live operation

The guidance encourages organisations to use iterative performance testing as an assurance mechanism.

This involves conducting repeated tests on the system's performance using supplier-provided documentation to stay informed about changes and assess effectiveness. Iterative bias audits are also recommended to identify and address biases that may arise during system operation.

Establishing a user feedback system is essential, allowing employees and applicants to report issues encountered while interacting with the AI system. The feedback system should provide options for detailed descriptions, severity indication, and whether it prevented further system use.
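A minimal feedback record capturing the fields the guidance mentions (a detailed description, a severity indication, and whether the issue prevented further use) might look like the following. The field names, severity scale, and example values are assumptions for this sketch.

```python
# Illustrative user-feedback record for an AI recruitment tool.
# Field names, the 1-5 severity scale, and the example values are
# assumptions for this sketch, not prescribed by the guidance.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class FeedbackReport:
    reporter_role: str          # e.g. "applicant" or "employee"
    description: str            # detailed description of the issue
    severity: int               # e.g. 1 (minor) to 5 (critical)
    blocked_further_use: bool   # did the issue prevent continued use?
    submitted_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

report = FeedbackReport(
    reporter_role="applicant",
    description="Video interview tool rejected my captioning request.",
    severity=4,
    blocked_further_use=True,
)
print(report.severity, report.blocked_further_use)  # → 4 True
```

Structuring reports this way makes it straightforward to triage by severity and to route blocking issues for immediate escalation.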

By implementing these mechanisms, organisations can continuously monitor performance, address biases, and gather valuable user feedback for improvement.

Osborne Clarke comment

Employers using AI in recruitment should be aware of these principles. Although compliance with the guidance is not mandatory, the principles outlined within it will feed into enforcement by regulators such as the Equality and Human Rights Commission and the Information Commissioner's Office (ICO). While the guidance primarily focuses on AI in recruitment, many of its recommendations are applicable to the use and procurement of AI throughout the entire employment lifecycle.

The potential risks that AI systems used in relation to recruitment (and at other points in the employment lifecycle) can pose to the rights of individuals at work are reflected in their inclusion in the "high risk" category under the EU's AI Act.

As well as being aware of and implementing (as appropriate) this guidance, employers using AI at any stage in their employment lifecycles should, in particular, consider:

  • Completing due diligence before implementing any AI tools, including understanding the training data used to build the model, to fully understand the risks.
  • Encouraging HR to be open with candidates about how such technologies are used and the review mechanisms in place, to ensure transparency and reduce the risk of backlash.
  • Implementing some form of regular human review in relation to the results produced by AI tools, maintaining the "human" part of human resource functions.
  • Putting in place policies clearly setting out what the expectations are around AI use in connection with work – acceptable use of technology policies should be updated.
  • Listening to employee feedback and addressing any concerns or issues that arise. Providing ongoing support and training to ensure employees feel comfortable working with AI tools.
  • Reviewing their internal AI strategy and deciding on the steps required to align their use of AI tools with the emerging regulatory frameworks.

A number of UK regulators have published their strategic approaches to regulating AI, as required by DSIT in the white paper consultation response. Employers should monitor the activity of the regulators that are relevant to their businesses.

In particular, the ICO stated in its AI strategy report that it will publish a report in 2024 following "a series of engagements" with a number of providers of AI recruitment solutions. This report, combined with the guidance, will hopefully set out clear expectations for developers and users of recruitment AI systems.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
