Employers who automate recruitment need to guard against unexpected bias

HR professionals are on the front line of the battle against discrimination, working to ensure that their organisations treat employees and job applicants without bias. This battle has a long way to go.

The average white job hunter in France gets 83 per cent more callbacks for interviews than a non-white applicant, according to a study led by Northwestern University that was published over the summer. Even in the US – where hiring by large employers is subject to strict racial monitoring in an attempt to eradicate bias – white applicants get 33 per cent more callbacks, the study said.

Historically, legislation was seen as the answer. Most developed countries make discrimination illegal around protected characteristics like gender, disability, age, ethnicity and sexual orientation. However, technology is now often heralded as the saviour, particularly in relation to unfairness in recruiting.

A plethora of solutions exists, each aiming to root out unfairness in its own way. All promise more rational, fact-based decisions than humans can make: some through gamification or better psychometrics, others by using artificial intelligence (AI) to sift candidates. Some even deploy facial recognition technology that, its makers claim, can discern relevant characteristics while remaining blind to ethnicity.

Facial expressions

Companies turning to emerging technology include Unilever. The consumer goods giant has used software developed by HireVue to profile more than 100,000 candidates globally, analysing their language, tone and facial expressions against a database of 25,000 previously successful candidates.

HireVue claims its algorithms are more objective than humans. One company it worked with, it says, has already employed salespeople who make 15 per cent more sales than typically sourced hires.

Rebekah Wallis, director of people and corporate responsibility at Ricoh UK, says her business has benefited from its early adoption of AI in the recruitment process: "We invested in software that uses algorithms to review CVs. This improved the candidate experience, ensuring candidates heard back quickly. Positive candidate experience can only be a good thing."

But many – including companies polled by Ius Laboris – remain deeply sceptical that technology built by inherently biased humans can ever really be bias-free, or even less biased than existing recruitment practices.

"It's a fact that humans are biased," says Ben Taylor, chief technical officer of US-based, AI-powered, automated decision- making platform Rainbird. "Add technology, though, and it's easily the case that bias is perpetuated, even magnified."

The more complex the AI, the more difficult it is for humans to know how a decision has been made, even though humans created the algorithms, says Taylor.

Such is the distrust of technology that only 6 per cent of respondents to our research thought its greatest benefit was fairness, despite nearly half saying it is useful for making efficiencies.

Turning to technology

But there was a further interesting element to the data: although people distrust technology, a sizeable 22 per cent said they'd use it anyway, regardless of whether it risks bias.

"This reveals a paradox about technology at the moment," says Emily He, senior vice president of human capital management (cloud business) at Oracle. "It shows HR recognises its own bias, but that if technology has bias too, it's merely replacing one with the other and that's OK.

"Personally, I feel biases can be more easily surfaced with tech and, with this, HR professionals can be better positioned to do something about them and reduce them."

Oracle's AI-powered chatbots mean jobseekers can ask more questions in advance, giving a better candidate experience. This, she claims, makes people more willing to go forward with the process because they trust the system will be fair.

Amazon suspended one AI-based recruitment project because it profiled future successful candidates against those it had already hired and so excluded women. Emily He says Oracle's technology, by contrast, is agnostic of background. For instance, she has found that solutions engineer roles tend to go to former product marketers, something Oracle had not initially expected.

But even so, is there ever a justification for using technology known to perpetuate a bias – for example, to improve time-to-hire rates – on the basis that the elimination of bias is impossible?

"In recruitment there is always a balancing act between speed of hire and accuracy of hire," says Caroline Smith, Deputy General Counsel, International at background-check specialists HireRight. "There are now specific laws around how organisations automate their processes, for instance not filtering against protected characteristics, and to comply with local discrimination and data handling laws, such as the General Data Protection Regulation within the EU."

National idiosyncrasies

Organisations, especially global ones, also need to consider how national idiosyncrasies could render a one-size-fits-all technology solution unworkable.

"In most companies, part of a normal screening process is to screen people's addresses," says Smith. "In Japan, though, where organised crime groups exist, address matching is not permitted, as it could lead to someone who innocently lives in a criminalised area being unfairly excluded. In Israel, laws were recently passed preventing recruiters from filtering against those with a criminal background, because that is now deemed to be discriminatory."
