Imagine your resume and job application being rejected before a human ever laid eyes on it. With the advent of AI, this is no longer a prophecy but a present-day reality. In what many call the age of AI, the application you just sent may well be under assessment by a machine right now. A heartless robot, incapable of innate empathy, rejecting a person seeking the only way to feed their family? Is this how things are, and how they are supposed to be? And if so, how good is AI at the job, or are there lacunae in it?
Much of today's AI is powered by large language models (LLMs), which are trained on large and diverse datasets. Their algorithms learn patterns and features from the data they are fed, and through extensive supervised training on behemoth datasets, these models become responsive and capable of handling the tasks given to them. What a model can do depends on the specific datasets used to train it; hence, LLMs such as OpenAI's GPT models can generate text and images on demand.
Given this versatility, AI holds promise for a company's recruiting process. AI chatbots are already popular among companies because they enable prompt communication with applicants. AI also reaches out to candidates for a particular position using their LinkedIn ranking.
Rittu Bhatia, the hiring leader of Genpact, an American multinational technology company, says that AI tools have made the hiring process touchless up to the interview stage for 40% of its new hires. The use of AI has also produced a 15% increase in recruiter productivity and cut the time to hire from 62 days to 43, Bhatia added.
Hilton Hotels & Resorts implemented an AI-enabled screening tool and saw its time-to-hire drop from 42 days to just 5, an 88% decline. L'Oréal used AI-enabled screening tools, and the time to review a resume dropped from 40 minutes to 4, a 90% reduction. Hotel companies such as Hilton are constantly trying to find and hire staff; if Hilton can make an offer to a housekeeping candidate in 5 days while a competitor takes 42, the competitor has already lost that battle.
This efficiency tempts many companies to put the pedal to the metal, and many executives are moving forward with ChatGPT. But this is where judgment, in the face of opportunity, must balance daring with caution. In 2017, Amazon scrapped its AI-based candidate evaluation tool after it was shown to discriminate against female candidates, assigning lower scores to women's resumes when ranking applicants. The bias stemmed from the under-representation of female applicants in the training dataset used to create the model, a prime example of how biases form in AI: skew in the data is carried over into the model.
The prime cause of bias is biased training data: a skewed sample containing proportionately more records of one group achieving a particular outcome than of another. If a manager built a simple classification model to label a job candidate as "good for the job," the manager might miss multiple factors that mark a good candidate, and the model's predictions might not fit the role, ultimately losing potential assets for the company. Specifically, factors like person-job fit, person-environment fit, and employee motivation play a key role in determining how well a candidate suits the job environment.
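A toy sketch makes the mechanism concrete. The data below is hypothetical, and the "model" is deliberately naive: it simply learns the majority hiring outcome per group. Because one group is under-represented and its few records skew negative, the model inherits that skew, much as in the Amazon example.

```python
# Hypothetical sketch: how a skewed training sample biases a naive model.
# The "classifier" learns only the majority outcome per group, so the
# under-represented group "B" inherits a negative label.
from collections import Counter

# Toy training records: (group, hired). Group "B" has few, mostly
# negative records -- a skewed sample.
training = [("A", 1)] * 70 + [("A", 0)] * 30 + [("B", 1)] * 2 + [("B", 0)] * 8

def majority_label(records, group):
    labels = [hired for g, hired in records if g == group]
    return Counter(labels).most_common(1)[0][0]

model = {g: majority_label(training, g) for g in ("A", "B")}
print(model)  # equally qualified candidates get different predictions by group
```

Nothing about the candidates themselves drives the split; the imbalance in the records alone does.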
To mitigate such biased predictions, companies can use toolkits that promote fairness in the AI training itself. For instance, AIF360 (AI Fairness 360) is a toolkit developed by IBM that provides bias mitigation algorithms and fairness metrics that can be built into hiring models. But this solution isn't as easy as it sounds: the company must invest in the toolkit or entrust it to a third party, which raises further financial and security concerns.
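To illustrate the kind of fairness metric such toolkits report, here is a hand-computed "disparate impact" ratio (the selection rate of the unprivileged group divided by that of the privileged group). This is a standalone illustration with hypothetical numbers, not the AIF360 API itself; a ratio below the commonly cited 0.8 threshold flags potential adverse impact.

```python
# Hand-computed disparate impact ratio, one of the fairness metrics that
# toolkits like AIF360 expose. Data is hypothetical.
def disparate_impact(outcomes):
    """outcomes: list of (group, selected) pairs, selected in {0, 1}."""
    def rate(group):
        picks = [s for g, s in outcomes if g == group]
        return sum(picks) / len(picks)
    return rate("unprivileged") / rate("privileged")

screening = ([("privileged", 1)] * 50 + [("privileged", 0)] * 50 +
             [("unprivileged", 1)] * 30 + [("unprivileged", 0)] * 70)
ratio = disparate_impact(screening)
print(round(ratio, 2))  # 0.6 -- below 0.8, so the screening warrants review
```

In practice a toolkit also supplies mitigation algorithms (e.g. reweighing training examples), but even this single number shows what a fairness audit of a hiring model is checking.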
AI does genuinely seem to help in screening and selecting applicants in large volumes, such as interns for a company's workforce, but choosing a candidate for an esteemed role that requires more than qualifications and skills is a tough call. It is plausible to think that AI cannot read between the lines.
Believe it or not, some companies use AI as an interviewer too. The interview, a crucial part of hiring, is the human touch of the recruiting process. An AI-enabled tool should not conduct it, not because it is incapable of effective evaluation, but because candidates cannot evaluate the company, or predict the environment they will work in, without talking to its people. Companies ought to remember that candidates are selecting them as much as they are selecting candidates. Employees will not spend their working days with chatbots; they will socialize with people and come to know the cohesiveness of the organization.
Yet big organizations like Unilever have used AI as an interviewer. The video-recorded interview was provided by HireVue, an AI human resource management technology, which let candidates interview at a time convenient to them and eliminated the hours spent scheduling interviews. The system analyzed a candidate's tone of voice, choice of words, and micro facial movements against the traits of Unilever's successful employees. But where does this seemingly great strategy fall short?
AI interviewers are not yet smart enough to reliably read candidates' different faces and skin tones, interpret the body language of a neurodivergent person, or recognize speech patterns where the accent is heavy or unique.
AI is growing stronger by the second, but in recruiting it remains premature. It still cannot read between the lines or recognize all kinds of people, and it overlooks qualities of a candidate that only recruiters themselves can see. It can improve and do wonders, but a separate board of qualified members is needed to control it and to build models with proper training datasets. In addition, companies must have committees responsible for AI governance, regulation, risk, and security. At the end of the day, it may have greater intelligence, but it still isn't human.