ARTICLE
27 August 2024

GenAI Bots Get A Big Say In Hiring

S.S. Rana & Co. Advocates

Contributor

S.S. Rana & Co. is a full-service law firm with an emphasis on IPR, having its corporate office in New Delhi and branch offices in Mumbai, Bangalore, Chennai, Chandigarh, and Kolkata. The Firm is dedicated to its vision of proactively assisting its Fortune 500 clients worldwide as well as grassroots innovators, with the highest quality legal services.

Introduction:

In the past, recruitment was a straightforward process in which employers relied heavily on face-to-face interactions and manual assessments to evaluate potential candidates. Technology was used mainly to support the human interface and streamline these processes. However, in the rapidly evolving landscape of technology-aided recruitment, Artificial Intelligence (AI) has emerged as a game-changer.

Companies across various sectors are increasingly leveraging AI tools, including generative AI, to enhance their recruitment processes while reducing reliance on human involvement. Commonly cited benefits include:

  1. Improved quality of hire
  2. Automate tedious manual tasks
  3. Better experience for candidates
  4. An optimized recruitment process
  5. Cost-effective hiring
  6. Reduced time to hire
  7. No more 'talent waste'1

A recent study by SHRM shows that approximately 26% of organizations use Artificial Intelligence to support Human Resource-related activities. Employers are increasingly turning to AI to perform tasks that previously were completed by Human Resource (HR) professionals.2

According to a report from Market Research Future, the global AI recruitment market is forecasted to reach $942.3 million by 2030, with an expected compound annual growth rate of 6.9% from 2020 to 2030.3

AI in Recruitment: Practical Examples4

  • Genpact's IMatch:
    A notable example is professional services firm Genpact, which recently launched IMatch, an in-house generative AI-based resume parsing and job-matching engine. By integrating AI into its hiring processes, the company has significantly enhanced efficiency: the tool has sourced 40% of new hires, driving a notable 15% boost in recruiter productivity. Additionally, the average hiring timeline has been reduced from 62 days to just 43 days, streamlining the recruitment process substantially.
  • Simplilearn:
    In the Edtech sector, Simplilearn has been using ChatGPT and other AI tools for over a year to optimize job descriptions, create proficiency assessments and conduct psychometric tests.
  • Peoplefy:
    A recruitment services provider, Peoplefy employs GenAI-based tools to customize recruitment processes, such as sending personalized mass mailers tailored to each candidate's unique experience and background.
  • Welspun Enterprises:
    Welspun Enterprises, an infrastructure development firm, utilizes a generative AI bot to assist executives during interviews. This has drastically improved hiring efficiencies, raising the selection ratio from 15% to 55%.

Challenges AI Recruiting may face:

"When people talk about AI, both the proponents and the detractors like to mystify it as a silver bullet or something that is inherently evil. Bias can be a real problem. A lot of AI is being used to make all kinds of automated decisions. It's important that we keep it trustworthy and fair, transparent and responsible," Peter van der Putten, director of decisioning and AI solutions at Pegasystems agreed. 5

In today's HR and recruitment sector, organisations are increasingly leveraging AI-enabled technologies across every stage of the recruitment process, including sourcing, screening, interviewing and selection. But they often fail to recognize that AI models can be inherently biased because they learn from data that contains biases. If an algorithm is trained on employee profiles that contain attributes correlated with demographic characteristics, it may inadvertently favor or disadvantage particular groups. For instance, an AI recruitment tool trained on profiles of predominantly male employees might be more likely to recommend male candidates over equally qualified female candidates, perpetuating gender imbalances in the workplace.

The underlying issue in such cases is that AI models learn from historical data, which often reflects existing biases and inequalities.
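
To illustrate the point, the following minimal sketch (synthetic data and hypothetical feature names, not any vendor's actual model) shows how a screening model trained on historically skewed hiring outcomes can reproduce that skew even when gender is never given to it directly, because a correlated proxy feature carries the same signal.

```python
# Minimal sketch (synthetic data, hypothetical feature names): a screening model
# trained on historically skewed hiring outcomes learns to rely on a proxy
# feature that correlates with gender, even though gender is never an input.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
gender = rng.integers(0, 2, n)            # 0 = female, 1 = male (held out of the model)
skill = rng.normal(0, 1, n)               # genuinely job-relevant signal
proxy = gender + rng.normal(0, 0.3, n)    # correlates with gender, not with skill

# Historical hires favoured men: the label depends on skill AND gender.
hired = (skill + 1.5 * gender + rng.normal(0, 1, n) > 1.0).astype(int)

X = np.column_stack([skill, proxy])       # gender is excluded, the proxy is not
model = LogisticRegression().fit(X, hired)

scores = model.predict_proba(X)[:, 1]
print("mean score, male candidates:  ", scores[gender == 1].mean())
print("mean score, female candidates:", scores[gender == 0].mean())
# The gap in mean scores persists without any explicit gender feature,
# because the proxy lets the model reproduce the historical imbalance.
```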

Discrimination/Biases:

While these AI technologies offer significant efficiencies and insights, they also present risks of bias and may perpetuate historical patterns of discrimination based on racial or ethnic origin, gender, disability, age or sexual orientation, leading to unfair recruitment. Furthermore, there is a substantial risk of digital exclusion for individuals who may lack proficiency in or access to technology due to age, disability, socioeconomic status or religion.

For instance, general-purpose chatbots, which are not specifically designed for recruitment, may be inappropriate for hiring tasks. These chatbots might lack the necessary training on relevant or comprehensive data, leading to incorrect or potentially illegal responses. This not only jeopardizes the recruitment process but can also misguide candidates towards other opportunities, including those with competitors.

Headhunting software, while powerful, can further exacerbate biases, especially if there are pre-existing notions of an ideal candidate for a position. Automated screening tools, often trained on historical data, risk inheriting biases from past recruitment practices and may rely on proxy indicators of success that are irrelevant to the job at hand.

For instance, in 2014, Amazon developed an AI recruitment tool to streamline its hiring process. It was found to be discriminatory against women, as the data used to train the model came from the past 10 years, during which most selected applicants were men due to male dominance in the tech industry.6 The tool favored resumes containing terms more commonly associated with male candidates and downgraded resumes that included the word "women's", as in "women's chess club captain". The system was consequently scrapped in 2018.

In August 2023, a China-based tutoring company, iTutor Group, agreed to pay $365,000 to settle a lawsuit brought by the US Equal Employment Opportunity Commission (EEOC), which claimed the company used hiring software powered by artificial intelligence to illegally weed out female applicants aged 55 or older and male applicants aged 60 or older.7

Facial Recognition

There is no scientific consensus on the validity of inferring emotions from facial expressions, which complicates the reliability of AI technologies that employ facial recognition in video interviews. Relying heavily on AI could also reduce the human element in hiring, making the process feel impersonal for candidates. For instance, creative roles often require assessing a candidate's portfolio and creative thinking, which AI might not evaluate effectively.

According to a 2019 report from LinkedIn, 91% of talent professionals say soft skills are very important to the future of recruiting and HR; however, this is something AI and ML cannot assess. Qualities such as communication, time management, organisation, problem-solving, critical thinking and interpersonal skills are human skills and can only be assessed by another human, not by AI.8

What data do AI-driven models collect in recruitment, and how does it result in automated decision-making?

AI-driven models used in the recruitment process gather data from various sources such as resumes, cover letters, profiles and online assessments. This collected data is then processed and standardized. Machine learning algorithms analyze the collected data to identify patterns, match skills to job requirements and predict candidate suitability. Based on this analysis, the AI system makes decisions such as shortlisting candidates, ranking applications or even extending job offers without human intervention, thereby leading to automated decision-making.
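
The following is a deliberately simplified sketch (hypothetical data structures, job requirements and thresholds) of such a pipeline: candidate data is standardized, scored against job requirements and shortlisted or rejected with no human in the loop, which is precisely what makes the decision "automated" in the legal sense.

```python
# Minimal sketch of an automated screening pipeline (hypothetical data and thresholds).
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    skills: set[str]
    years_experience: int

JOB_REQUIREMENTS = {"python", "sql", "communication"}
MIN_EXPERIENCE = 2
SHORTLIST_THRESHOLD = 0.6   # at least 60% of required skills must match

def score(candidate: Candidate) -> float:
    """Fraction of required skills the candidate covers."""
    matched = candidate.skills & JOB_REQUIREMENTS
    return len(matched) / len(JOB_REQUIREMENTS)

def automated_decision(candidate: Candidate) -> str:
    """Shortlist or reject with no human in the loop."""
    if candidate.years_experience >= MIN_EXPERIENCE and score(candidate) >= SHORTLIST_THRESHOLD:
        return "shortlisted"
    return "rejected"

applicants = [
    Candidate("A", {"python", "sql", "excel"}, 3),
    Candidate("B", {"java", "communication"}, 5),
]
for c in sorted(applicants, key=score, reverse=True):
    print(c.name, automated_decision(c))
```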

Given that data is the new oil, employers must prioritize data privacy and security to protect candidates' personal information. Article 22 of the GDPR specifically addresses automated decision-making and profiling. It grants individuals certain rights and imposes obligations on organisations that use automated decision-making systems.

Individuals have the right not to be subject to decisions based solely on automated processing, including profiling, which produce legal effects concerning them or similarly significantly affect them. This includes recruitment decisions made without human involvement.

However, there are exemptions where automated decision-making is permissible, such as:

  • When it is necessary for entering into or performing a contract;
  • When it is authorized by Union or Member State law;
  • When it is based on explicit consent from the individual9

There are numerous examples of AI resulting in automated decision-making with legal effects for the person concerned. For instance, AI is used in credit scoring, where algorithms analyze a user's online behavior and numerous personal data points to determine creditworthiness. To know more about credit scores and the data privacy issues that arise from credit scoring activities, please refer to our article titled "Credit Score calculation and Data Privacy concerns".10

In addition, the EU Artificial Intelligence (AI) Act, which came into force on August 01, 2024, stands as the world's first comprehensive AI regulation that aims to govern the risks of AI systems and protect the fundamental rights of EU citizens. This legislation is intended to encourage the development of AI that is both human-centric and trustworthy, ensuring robust protection of health, safety, fundamental rights, democracy and the rule of law against potential AI-related harms. Simultaneously, the Act seeks to foster innovation and maintain the effective operation of the internal market.

The EU AI Act classifies AI systems according to different risk categories. The law specifies that AI systems used in employment, workers' management and access to self-employment (in particular for the recruitment and selection of persons; for making decisions affecting the terms of the work-related relationship, promotion and termination of work-related contractual relationships; for allocating tasks on the basis of individual behavior, personal traits or characteristics; and for monitoring or evaluating persons in work-related contractual relationships) should be classified as 'high-risk', since those systems may have an appreciable impact on the future career prospects and livelihoods of those persons and on workers' rights.

To know about what these categories are, kindly refer to our article titled "EU Parliament Gives Final Nod to Landmark Artificial Intelligence Law"11

Indian Advisory on usage of AI

The Indian government has been active in addressing the challenges posed by the increasing use of generative AI models. On March 15, 2024, the Ministry of Electronics and Information Technology (MeitY) issued an updated advisory on the use and deployment of artificial intelligence tools, aimed at balancing the risks and regulations associated with AI models. The Advisory directs all intermediaries and platforms to ensure that the use of Artificial Intelligence model(s)/LLM/Generative AI, software(s) or algorithm(s) on or through their computer resources does not permit users to host, display, upload, modify, publish, transmit, store, update or share any unlawful content as outlined in Rule 3(1)(b) of the IT Rules, or violate any other provision of the Information Technology Act, 2000 and other laws in force.12

The Advisory further states that intermediaries and AI developers should ensure that AI models do not permit any bias or discrimination or threaten the integrity of the electoral process.13

The Advisory further mandates that Generative AI models, or further development on such models, should be made accessible to users in India only after clearly labeling the possible fallibility or unreliability of the output generated. Additionally, a 'consent popup' or similar mechanism must be implemented to explicitly inform users about these potential issues. Furthermore, all intermediaries and platforms are required to inform their users, through their Terms of Use and User Agreements, about the consequences of dealing with unlawful information. This includes the possibility of disabling access to or removing such information, suspending or terminating the user's access or usage rights to their accounts, and potential legal consequences under applicable law.

Conclusion: Fairness, Transparency and business growth

Addressing these concerns requires a balanced approach. Organizations must ensure that AI tools are designed with fairness and transparency in mind. This involves regular audits of AI systems to detect and correct biases, as well as maintaining a level of human oversight to ensure that the final hiring decisions are both fair and holistic. Additionally, fostering an inclusive culture where AI is seen as a tool to augment human capabilities rather than replace them can help mitigate the depersonalization concern.
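
One concrete form such an audit can take is a selection-rate comparison across groups, often assessed against the "four-fifths rule" applied in US employment practice by the EEOC. The sketch below (hypothetical numbers) computes an adverse-impact ratio and flags the tool for human review when the ratio falls below 0.8.

```python
# Minimal sketch of one audit check (hypothetical numbers): the "four-fifths rule"
# compares selection rates across groups; a ratio below 0.8 is a common red flag
# that the screening tool should be reviewed for adverse impact.
def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

def adverse_impact_ratio(rate_group: float, rate_reference: float) -> float:
    """Ratio of a group's selection rate to the highest (reference) group's rate."""
    return rate_group / rate_reference

rate_men = selection_rate(selected=60, applicants=200)      # 0.30
rate_women = selection_rate(selected=30, applicants=200)    # 0.15
ratio = adverse_impact_ratio(rate_women, rate_men)          # 0.50

print(f"Adverse impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Below the four-fifths threshold: flag the tool for human review.")
```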

In conclusion, the recruitment process has come a long way from the days of handshakes and paper resumes. While AI-driven recruitment tools offer numerous benefits, it is crucial to navigate the challenges they present thoughtfully. By leveraging the strengths of AI while remaining vigilant about its limitations, organizations can create a more efficient, fair and personalized hiring process.

Footnotes

1 https://harver.com/blog/benefits-ai-in-recruiting/

2 https://www.lanepowell.com/Our-Insights/279402/Oz-Looks-a-Little-Scary-as-State-and-Federal-Law-Look-to-Regulate-the-AI-Wizard-in-Employment

3 https://ellow.io/top-ai-in-hiring-statistics/#:~:text=According%20to%20a%20report%20from,6.9%25%20from%202020%20to%202030.

4 https://economictimes.indiatimes.com/jobs/mid-career/the-rise-of-ai-in-recruitment-process-how-companies-are-using-artificial-intelligence-for-hiring/articleshow/110427579.cms?from=mdr

5 https://www.destinationcrm.com/Articles/Editorial/Magazine-Features/Tips-for-Battling-Bias-in-AI-Based-Personalization-147143.aspx

6 https://www.reuters.com/article/idUSKCN1MK0AG/#:~:text=Insight%20%2D%20Amazon%20scraps%20secret%20AI%20recruiting%20tool%20that%20showed%20bias%20against%20women,-By%20Jeffrey%20Dastin&text=SAN%20FRANCISCO%20(Reuters)%20%2D%20Amazon,engine%20did%20not%20like%20women.

7 https://www.reuters.com/legal/tutoring-firm-settles-us-agencys-first-bias-lawsuit-involving-ai-software-2023-08-10/

8 https://www.oakstone.co.uk/new-blog/ai-in-recruitment

9 Refer to Article 22 of GDPR: https://gdpr-info.eu/art-22-gdpr/

10 https://www.barandbench.com/law-firms/view-point/credit-score-calculation-and-data-privacy-concerns

11 https://ssrana.in/articles/eu-parliament-final-nod-landmark-artificial-intelligence-law/

12 https://www.meity.gov.in/writereaddata/files/Advisory%2015March%202024.pdf

13 https://ssrana.in/articles/why-free-and-fair-elections-in-2024-is-a-challenge/

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
