Artificial intelligence can be a useful tool to improve efficiency within HR departments, but it is not without risks. Employers must consider the potential issues and legal liability that may accompany this technology, particularly the risk of perpetuating bias.
Employers Legally Obligated to Eliminate Discrimination
In Manitoba, The Human Rights Code prohibits employment-related discrimination, which also applies to the recruitment process. The Code defines discrimination as differential treatment of an individual on the basis of various characteristics, such as ancestry, nationality, ethnic background, religion, age, sex, gender identity, sexual orientation, marital or family status, source of income, political belief, physical or mental disability, and social disadvantage.
If someone is treated unequally to others because of these characteristics, or because they belong (or are perceived to belong) to a certain group, then this may be considered discrimination.
Under the Code, employers have a responsibility to eliminate discrimination in every part of their hiring practices, regardless of the tools and platforms that they use.
AI & Hiring Practices
As AI continues to be implemented in many industries, it is no surprise that human resources departments are considering how to effectively use AI as part of their hiring practices.
A recent poll found that 52% of the 505 Canadian hiring decision-makers surveyed used AI in their hiring practices, and 73% believed that there were benefits to using AI during the hiring process. Hiring managers cited various positive impacts from using AI, including enhanced customer service and improved process efficiency.
AI Tools for HR Professionals
As AI continues to be integrated into various platforms, HR departments now have many different options to assist them in their responsibilities — even a quick Google search will reveal the large number of AI tools that are available to employers.
For example, HR departments can implement AI-driven programs to screen applications, rank candidates, schedule interviews, and even conduct video interviews.
In addition, certain tools may be used to filter through applications based on data sets to identify and rank the most suitable candidates for a certain position. These tools may also continuously learn from an employer's hiring practices to improve their screening and ranking of candidates.
Risk of Discrimination in Hiring Practices
While these tools may be helpful, they may also introduce the potential for discriminatory hiring practices, particularly when employers use AI to screen applications and rank candidates. Like many AI programs, these tools may inadvertently introduce biases into your hiring practices.
AI tools, such as those referenced above, are trained on data sets. If those data sets contain biases, it is possible that the AI tool will continue to reinforce these biases in ways that could be discriminatory.
In fact, a recent article from the Financial Post noted that AI may perpetuate bias in an employer's hiring practices by making inferences based on certain characteristics, including characteristics that are protected by the Code. For example, if a candidate is rejected by an AI tool based on their gender identity, that would likely be considered discrimination under the Code.
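To illustrate the mechanism, here is a minimal, hypothetical sketch in Python of how a naive screening score trained on biased historical outcomes can reproduce that bias. The data, feature names, and scoring approach are invented for the example; they are not drawn from any real AI vendor's tool.

```python
# Toy illustration (not any vendor's actual model): a naive screening score
# "learned" from historical hiring outcomes. Because the historical data
# under-hired candidates carrying one attribute, the learned score penalizes
# that attribute on new applicants, even though it says nothing about the job.
from collections import defaultdict

history = [
    {"features": {"cert_a", "attr_x"}, "hired": False},
    {"features": {"cert_a", "attr_x"}, "hired": False},
    {"features": {"cert_a"},           "hired": True},
    {"features": {"cert_a"},           "hired": True},
]

# Learn a hire rate for each feature from the historical outcomes.
counts, hires = defaultdict(int), defaultdict(int)
for record in history:
    for f in record["features"]:
        counts[f] += 1
        hires[f] += record["hired"]
rate = {f: hires[f] / counts[f] for f in counts}

def score(features):
    """Average the learned hire rates of a candidate's features."""
    return sum(rate.get(f, 0.5) for f in features) / len(features)

# Two equally qualified applicants; the one carrying the historically
# disfavoured attribute receives a lower score.
print(score({"cert_a"}))            # 0.5
print(score({"cert_a", "attr_x"}))  # 0.25
```

If "attr_x" is, or correlates with, a characteristic protected by the Code, a tool that behaves this way could expose the employer to a discrimination claim even though no one intended the result.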
Best Practices for HR Professionals Using AI
Below, we have presented a few best practices to help keep HR departments from discriminating against candidates through bias introduced by AI tools. Although this is not an exhaustive list, these practices are a good starting point to assist employers in improving their hiring practices, leading to a stronger workforce and reduced legal liability.
- When you select an AI tool for implementation in your organization, consult the vendor for more information on how the program combats bias in its processes. Make sure to document your discussions with the vendor regarding bias.
- Conduct and document regular reviews of the data sets that the AI tool uses to make decisions regarding candidate screening and ranking, in order to make sure that the data sets are free from discrimination based on characteristics that are protected by the Code.
- Conduct and document regular reviews of the candidates that the AI tool recommends to you to ensure that it is not discriminating against one group of applicants based on characteristics that are protected by the Code (a simple sketch of what such a review might look like follows this list).
- Create policies for checks and balances that your organization will use to ensure that HR team members have the opportunity to regularly monitor the use of AI tools in your hiring practices.
- When it comes to discrimination claims, an employer will often be called upon to prove that the decision was made for bona fide reasons and was not discriminatory. If you are using an AI tool in your hiring practices, ensure that you have access to data on how the AI evaluated the various candidates in case it needs to be used as evidence.
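To make the review step above more concrete, here is a minimal sketch of how an HR team might compare selection rates across groups of applicants. It is illustrative only: the candidate records, group labels, and the 0.8 screening threshold (borrowed from the common "four-fifths" rule of thumb) are assumptions for the example, not features of any particular AI tool and not a legal standard under the Code.

```python
# Illustrative only: a simple selection-rate review over hypothetical
# screening results. Each record notes whether the AI tool advanced the
# candidate and a self-reported demographic attribute. A real review should
# cover each characteristic protected by the Code and use properly
# collected, governed data.
from collections import defaultdict

candidates = [
    {"advanced": True,  "group": "group_a"},
    {"advanced": False, "group": "group_a"},
    {"advanced": True,  "group": "group_a"},
    {"advanced": True,  "group": "group_b"},
    {"advanced": False, "group": "group_b"},
    {"advanced": False, "group": "group_b"},
]

def selection_rates(records):
    """Return the share of candidates in each group that the tool advanced."""
    advanced, totals = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        if r["advanced"]:
            advanced[r["group"]] += 1
    return {g: advanced[g] / totals[g] for g in totals}

def flag_disparities(rates, threshold=0.8):
    """Flag groups whose selection rate falls below a chosen fraction of the
    highest group's rate (0.8 mirrors the "four-fifths" rule of thumb; it is
    a screening heuristic for further review, not a legal test)."""
    best = max(rates.values())
    return {g: rate for g, rate in rates.items() if rate < threshold * best}

rates = selection_rates(candidates)
print("Selection rates by group:", rates)
print("Groups to review further:", flag_disparities(rates))
```

A flagged group does not by itself establish discrimination, but it signals where the HR team should investigate, document its findings, and adjust the tool or its data if needed.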
AI's Risks and Rewards
While AI tools, such as those that assist HR departments with hiring practices, can be an excellent way to improve efficiency, they can also present risks for employers. In particular, these tools could unknowingly perpetuate biases in your hiring practices, which may be considered discrimination under the Code and expose your organization to legal liability.
Eliminating bias in your hiring practices is not only sound advice for reducing liability — it will also lead to a more diverse workforce, which is necessary to ensure that your organization is successful, balanced, and equitable.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.