As firms increasingly rely on artificial intelligence-driven hiring platforms, many highly qualified candidates are finding themselves on the cutting room floor.

Body-language analysis. Vocal assessments. Gamified tests. CV scanners. These are some of the tools companies use to screen candidates with artificial intelligence recruiting software. Job applicants face these machine prompts – and AI decides whether they are a good match or fall short.

Businesses are increasingly relying on them. A late-2023 IBM survey of more than 8,500 global IT professionals showed 42% of companies were using AI screening “to improve recruiting and human resources”. Another 40% of respondents were considering integrating the technology.

Many leaders across the corporate world hoped AI recruiting tech would end biases in the hiring process. Yet in some cases, the opposite is happening. Some experts say these tools are inaccurately screening some of the most qualified job applicants – and concern is growing that the software may be screening out the best candidates.

“We haven’t seen a whole lot of evidence that there’s no bias here… or that the tool picks out the most qualified candidates,” says Hilke Schellmann, US-based author of The Algorithm: How AI Can Hijack Your Career and Steal Your Future, and an assistant professor of journalism at New York University. She believes the biggest risk such software poses to jobs is not machines taking workers’ positions, as is often feared – but rather preventing them from getting a role at all.

Untold harm

Some qualified job candidates have already found themselves at odds with these hiring platforms.

In one high-profile case in 2020, UK-based make-up artist Anthea Mairoudhiou said that, after she was furloughed during the pandemic, her company told her to re-apply for her role. She was evaluated both on her past performance and via an AI-screening programme, HireVue. She says she ranked well in the skills evaluation – but after the AI tool scored her body language poorly, she was out of a job for good. (HireVue, the firm in question, removed its facial-analysis function in 2021.) Other workers have filed complaints against similar platforms, says Schellmann.

She adds that job candidates rarely know whether these tools are the sole reason companies reject them – by and large, the software doesn’t tell users how they’ve been evaluated. Yet she says there are many glaring examples of systemic flaws.

In one case, a candidate who’d been screened out submitted the same application with a tweaked birthdate that made them younger. With this change, they landed an interview. At another company, an AI resume screener had been trained on the CVs of employees already at the firm, giving people extra marks if they listed “baseball” or “basketball” – hobbies linked to more successful staff, often men. Those who mentioned “softball” – typically women – were downgraded.

Marginalised groups often “fall through the cracks, because they have different hobbies, they went to different schools”, says Schellmann.
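
That mechanism is easy to reproduce. The sketch below is a hypothetical illustration – not any vendor’s actual system – of a simple keyword model trained on the CVs of incumbent staff; because past success skewed male in the toy data, hobby words end up standing in for gender even though gender itself is never an input:

```python
# Hypothetical illustration of proxy bias: a screener trained on the CVs of
# incumbent "successful" employees learns to reward whatever terms co-occur
# with past success -- including hobby words that act as gender proxies.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy training data: CVs of current staff, labelled 1 if rated "successful".
# Because past successful hires skewed male, "baseball"/"basketball" co-occur
# with 1 and "softball" with 0 -- the model never sees gender, only proxies.
cvs = [
    "python sql baseball",   "java sql basketball",
    "python cloud baseball", "sql cloud basketball",
    "python sql softball",   "java cloud softball",
]
successful = [1, 1, 1, 1, 0, 0]

vec = CountVectorizer()
X = vec.fit_transform(cvs)
model = LogisticRegression().fit(X, successful)

# Inspect the learned weights: the hobby words carry large, spurious
# coefficients inherited straight from the skewed training data.
for word, coef in zip(vec.get_feature_names_out(), model.coef_[0]):
    print(f"{word:10s} {coef:+.2f}")
```

Printed out, the weights show “baseball” and “basketball” with positive coefficients and “softball” with a negative one – the bias arrives through the training data, with no explicit rule about gender anywhere in the system.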

In some cases, biased selection criteria are clear – such as ageism or sexism – but in others they are opaque. In her research, Schellmann applied for a call-centre job, to be screened by AI. Then she logged in from the employer’s side. She’d received a high rating in the interview, despite speaking nonsense German when she was supposed to be speaking English, yet a poor rating for the genuinely relevant credentials on her LinkedIn profile.

She worries the negative effects will spread as the technology does. “One biased human hiring manager can harm a lot of people in a year, and that’s not great,” she says. “But an algorithm that is maybe used in all incoming applications at a large company… that could harm hundreds of thousands of applicants.” 

‘No-one knows exactly where the harm is’

“The problem [is] no-one knows exactly where the harm is,” she explains. And, given that companies have saved money by replacing human HR staff with AI – which can process piles of resumes in a fraction of the time – she believes firms may have little motivation to interrogate kinks in the machine. 

From her research, Schellmann is also concerned that screening-software companies are “rushing” underdeveloped, even flawed, products to market to cash in on demand. “Vendors are not going to come out publicly and say our tool didn’t work, or it was harmful to people”, she says, while companies that have used them remain “afraid that there’s going to be a gigantic class action lawsuit against them”.

It’s important to get this tech right, says Sandra Wachter, professor of technology and regulation at the Oxford Internet Institute, University of Oxford.

“Having AI that is unbiased and fair is not only the ethical and legally necessary thing to do, it is also something that makes a company more profitable,” she says. “There is a very clear opportunity to allow AI to be applied in a way so it makes fairer and more equitable decisions that are based on merit and that also increase the bottom line of a company.”

Wachter is working to help companies identify bias through the co-creation of the Conditional Demographic Disparity test, a publicly available tool which “acts as an alarm system that notifies you if your algorithm is biased. It then gives you the opportunity to figure out which [hiring] decision criteria cause this inequality and allows you to make adjustments to make your system fairer and more accurate”, she says. Since its development in 2020, it has been implemented by businesses including Amazon and IBM.
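
Under the formulation Wachter and colleagues published in 2020 (and which has since appeared in tools such as Amazon SageMaker Clarify), the test compares a group’s share of rejections with its share of acceptances within each stratum of a conditioning attribute, such as the job family applied to, then averages the per-stratum gaps weighted by stratum size. A minimal sketch, with hypothetical column names and toy data:

```python
import pandas as pd

def demographic_disparity(df, facet_col, facet_value, outcome_col):
    """DD: the facet's share of rejections minus its share of acceptances.
    Positive values mean the group is over-represented among rejections.
    (Strata with no rejections or no acceptances are treated as zero here,
    a simplification for this sketch.)"""
    rejected = df[df[outcome_col] == 0]
    accepted = df[df[outcome_col] == 1]
    if len(rejected) == 0 or len(accepted) == 0:
        return 0.0
    return ((rejected[facet_col] == facet_value).mean()
            - (accepted[facet_col] == facet_value).mean())

def conditional_demographic_disparity(df, facet_col, facet_value,
                                      outcome_col, strata_col):
    """CDD: per-stratum DD averaged with weights proportional to stratum
    size, separating disparity explained by the conditioning attribute
    from disparity it cannot account for."""
    n = len(df)
    return sum(len(g) / n
               * demographic_disparity(g, facet_col, facet_value, outcome_col)
               for _, g in df.groupby(strata_col))

# Hypothetical applicant data: gender, department applied to, hire outcome.
df = pd.DataFrame({
    "gender": ["f", "f", "f", "m", "m", "m", "f", "m"],
    "dept":   ["eng", "eng", "sales", "eng", "eng", "sales", "sales", "sales"],
    "hired":  [0, 1, 0, 1, 1, 1, 1, 0],
})
print(conditional_demographic_disparity(df, "gender", "f", "hired", "dept"))
```

A CDD near zero suggests any raw disparity is accounted for by the conditioning attribute; a persistently positive value is the “alarm” Wachter describes, flagging rejections the chosen strata cannot explain.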

Schellmann, meanwhile, is calling for industry-wide “guardrails and regulation” from governments or non-profits to ensure current problems do not persist. If there is no intervention now, she fears AI could make the workplace of the future more unequal than before.
