As organisations increasingly use AI tools to improve their processes, a range of human resources-specific use cases has emerged, including transcribing interviews and performance meetings and drafting job adverts and annual reviews.
A common client question is whether these tools, when used in connection with making decisions about hiring, firing and workforce performance and management, constitute a 'high-risk' AI system for the purposes of the EU AI Act. Generally, the answer is "no" — but as with European Union law generally, and the AI Act specifically, things are not always so clear-cut.
High-Risk HR Systems
Annex III(4) of the AI Act categorises as 'high-risk' AI systems that are intended to be used:
- "For the recruitment or selection of natural persons, in particular to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates."
- "To make decisions affecting terms of work-related relationships, the promotion or termination of work-related contractual relationships, to allocate tasks based on individuals' behaviour or personal traits or characteristics or to monitor and evaluate the performance and behaviour of persons in such relationships."
Some AI systems will, or are likely to, fall within the Annex III(4) definitions.
- In relation to the recruitment and selection of candidates, these include: (1) automated CV and resume screening tools; (2) AI-enabled candidate scoring systems (e.g., through the use of psychometric analysis); (3) interview chatbots; and (4) algorithms that target job adverts at specific types of individuals.
- In relation to work-related decisions, these include: (1) task allocation systems that assign work based on (for example) predicted productivity or outcome; (2) performance monitoring dashboards that score employees (e.g., based on reliability or efficiency); (3) systems that recommend or decide on promotion or termination; and (4) in the context of so-called gig or platform work, algorithms that rank and allocate tasks to contractors.
Other AI systems will not, or are unlikely to, fall within the Annex III(4) definitions.
These include those that are administrative — i.e., not evaluative — in nature, such as payroll systems, time-tracking tools (provided they are used solely to record attendance), leave management software, performance dashboards and analytics tools that are not used to make individual decisions (e.g., modelling company-wide performance and conducting organisational planning).
Assessing AI Systems: Where To Start?
Beyond the clear-cut use cases, there are shades of grey.
The first step in assessing a new or existing AI system is to ask whether it determines or materially shapes the relevant recruitment, selection or performance evaluation decision. If yes, the system is likely to be classified as high-risk for the purposes of Annex III(4). But if the intended use is to assist a human reviewer, and not to determine the outcome of a decision, there are reasonable grounds to conclude that it is not a high-risk system. The following questions can help frame that assessment; a simple illustrative screen follows the list.
- Purpose and Functionality: Does the tool serve a purely administrative or assistive function (e.g., drafting text, generating summaries or otherwise facilitating documentation)? That is to say, does it assist managers in composing content, without producing or influencing evaluative or other outcomes in relation to promotion, termination or pay?
- Automated Evaluation or Profiling: Does the tool perform automated analysis of, or draw inferences from, an individual's characteristics, behaviour or traits (e.g., leadership potential, motivation, engagement level or 'flight risk')? If yes, are these outputs determinative or influential in employment-related decisions?
- Use of Historical or Predictive Data: Does the tool use training data from past performance reviews or workforce analytics to predict or simulate future performance or behaviour? In particular, does it produce scores, rankings or classifications based on those data, and are those outputs used to inform or justify employment decisions?
- Human Oversight and Design Control: Is there meaningful human oversight of the AI-generated outputs? For example, does a manager or other individual review, edit and sign off on the decision or action? Importantly, an AI system could in some cases be caught by Annex III(4) even where a human makes the final decision: for example, if the manager rubber-stamps the AI output or organisational practices otherwise treat the output of the system as authoritative.
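For organisations triaging a large inventory of AI tools, the four questions above can also be expressed as a rough first-pass screen. The sketch below, in Python, is a minimal illustration under our own assumptions: the field names, the `flag_for_legal_review` helper and its flagging logic are hypothetical simplifications, not tests drawn from the AI Act, and a flag is a prompt for proper legal review rather than a classification.

```python
from dataclasses import dataclass


@dataclass
class HrToolAssessment:
    """Hypothetical screening record for a single HR-related AI tool.

    All field names are illustrative; they are not terms defined by the AI Act.
    """
    purely_administrative: bool       # drafting, summarising or documenting only
    profiles_individuals: bool        # infers traits such as motivation or 'flight risk'
    predicts_performance: bool        # scores or ranks people from historical data
    meaningful_human_oversight: bool  # a human genuinely reviews, edits and signs off


def flag_for_legal_review(tool: HrToolAssessment) -> bool:
    """First-pass triage of the four questions above.

    True means 'escalate for a proper legal assessment'; False is not
    a conclusion that the tool falls outside Annex III(4).
    """
    # Evaluative or predictive functionality points towards high-risk status.
    evaluative = tool.profiles_individuals or tool.predicts_performance
    # Rubber-stamp 'oversight' does not take a tool out of scope, so anything
    # less than genuinely administrative use with real oversight is flagged.
    assistive = tool.purely_administrative and tool.meaningful_human_oversight
    return evaluative or not assistive


# Example: a transcription assistant used only to draft meeting notes.
transcriber = HrToolAssessment(
    purely_administrative=True,
    profiles_individuals=False,
    predicts_performance=False,
    meaningful_human_oversight=True,
)
print(flag_for_legal_review(transcriber))  # False: likely assistive; keep documenting use
```

Deliberately, the screen errs on the side of flagging: any evaluative functionality, or anything short of genuine human oversight, escalates the tool for review.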
What To Do — And What Comes Next?
In many cases, AI tools that are purely administrative or assistive will fall outside the high-risk classification under Annex III(4) of the AI Act. However, the boundary between an assistive tool and a determinative one can be fluid, and will be influenced both by technical functionality and by how the tool is used in practice. With that in mind, what should organisations do to ensure that such tools remain low-risk?
- Clearly define and implement guardrails — e.g., through policies and training — to ensure that administrative tools are used only to support documentation or communication tasks, and not to evaluate, score or decide on applicant or employee outcomes.
- Avoid 'feature creep', whereby a tool's summarisation or drafting functions (or their use by the HR team) start to influence recruitment, promotion or performance decisions.
- Maintain records of intended use, design choices and ongoing oversight processes to demonstrate and ensure that administrative tools continue to be assistive and non-evaluative; a sketch of what such a record might look like follows below.
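Where such records are kept electronically, even a lightweight structured register can make the assistive-use case easier to evidence. The sketch below shows one hypothetical shape for such a record; the `AiToolRegisterEntry` structure, its field names and the 'MeetingScribe' tool are our own illustrative assumptions, not anything prescribed by the AI Act.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class AiToolRegisterEntry:
    """Hypothetical entry in an internal AI-tool register.

    The structure and field names are illustrative; the AI Act does not
    prescribe any particular record format.
    """
    tool_name: str
    intended_use: str                   # documented, non-evaluative purpose
    evaluative_features_disabled: list[str] = field(default_factory=list)
    oversight_process: str = ""         # who reviews and signs off on outputs
    last_reviewed: date | None = None   # periodic check that use has not drifted


register = [
    AiToolRegisterEntry(
        tool_name="MeetingScribe",  # hypothetical product name
        intended_use="Transcribe and summarise meetings for note-taking only",
        evaluative_features_disabled=["sentiment scoring", "engagement analytics"],
        oversight_process="A manager reviews, edits and signs off on every summary",
        last_reviewed=date(2025, 6, 1),
    ),
]
```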
The European Commission is expected to issue guidance on the high-risk classification of AI systems by 2 February 2026, which will hopefully address some of the grey areas relating to Annex III(4), particularly how the intent, functionality and use of such systems should be assessed in practice. Until then, organisations should proceed with care: clearly documenting their intended uses of process improvement systems, maintaining strong human oversight of those systems, and being ready to adjust their practices, if needed, when the Commission's guidance is published.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.