Job hunters whose applications are being rapidly knocked back are airing their frustration as experts warn of the discriminatory risks of relying too heavily on AI for hiring.
In a LinkedIn post last week, Victoria-based human resources professional Leighan Morrell said that after being rejected within two hours of applying for a job, she suspected the company involved was using AI in its hiring process.
“I am absolutely shocked and baffled,” she said. “I applied for the role at 1.13pm today, it is now 3.17pm, and I have just been rejected from the role. How can a recruiter review my application along with all the others and reject me within two hours?”
Morrell said her experience matched the job description “to a tee” and that it was obvious the business was using AI.
“In fact, I have more experience than the role required,” she said in the post. “I think it took me longer to write the application than it did for them to reject me. It is my new record for a rejection; previously it was five hours.”
Companies are increasingly relying on AI throughout the hiring process, from scanning and screening CVs to conducting one-way chat and video interviews.
Sapia, a long-running AI interview platform, has conducted 9 million AI interviews for clients including Qantas, Woolworths, Bunnings and other major employers. It claims its interview platform improves the experience for hiring teams and potential hires, with a 95 per cent satisfaction rate among job seekers.
Interviewees conduct a text chat with an AI agent, which provides details on the company and position, describes how data from the chat will be used, warns against copying or generating text, and asks five questions – such as “describe a time you overcame a challenge” or “where do you see yourself in five years?” – that the interviewee is encouraged to answer using 50 to 100 words with no time limit.
Sapia founder Barb Hyman says the results from the platform are more equitable because it doesn’t capture demographic information such as age, gender or appearance, drawing only from the responses in the interview.
“In hiring, particularly high-volume hiring, there’s just a huge amount of bias,” she says. “Sapia is built on fairness. Everyone gets an interview, everyone gets to share their story of who they are, in their words and in their own time. There’s no personal data. It’s a truly blind, fair way to evaluate someone.”
Australian HR Institute chief executive Sarah McCann-Bartlett says that while some candidates are frustrated by employers' increasing use of AI and by being screened out, the technology is also being used purposefully by firms looking to reduce bias.
“Interestingly, some employers are now stripping out personal information from CVs and applications, using AI, so that those making the decision don’t show unconscious bias based on gender, where the candidate lives, how old they are, or their ethnicity,” she says.
However, Hyman acknowledges that while there are “very mature” clients at Sapia with high levels of understanding about how to make responsible decisions with AI, many are still new to the technology.
“There’s a whole 90 per cent of the market, I’d say, that are just buying without necessarily that level of scrutiny,” she says.
Australian Services Union national secretary Emeline Gaske says workers are deeply concerned about AI’s role in the hiring process for good reason.
“We know algorithms are riddled with bias and prejudice. That’s why we need human judgment, not just machines making recruitment decisions.”
Adelaide University associate professor in human resource management Connie Zheng, who has studied the use of AI in hiring, says there is still a clear need for human oversight and legal guardrails.
“We found organisational guidelines and legal requirements, such as non-discriminatory human resources policies, are more effective [than AI] in improving diversity and inclusion,” she says, along with having HR managers who are conscious of diversity. “We found AI didn’t make much of a difference.”
Research by University of Melbourne lawyer Natalie Sheard found employers using AI hiring systems to screen and shortlist candidates risked engaging in “algorithm-facilitated discrimination”.
The limited data used to train these systems can solidify traditional forms of discrimination by failing to reflect the diversity of the population, she says, as well as creating new forms of discrimination and paving the way for intentional discrimination. Sheard says algorithm-facilitated discrimination is especially problematic because the predictions and outcomes generated from these systems are often difficult to contest and the processes they use tend to be opaque.
The federal government has pledged $30 million to establish an AI Safety Institute aimed at monitoring, testing and sharing information on emerging AI uses, risks and harms, and in December introduced a national AI plan setting out voluntary guardrails for the adoption of generative AI.
However, Sheard says the government needs to review and reform discrimination laws to adequately protect job seekers.
“If we do not want disadvantaged groups to be subject to algorithm-facilitated discrimination, we need to take urgent action,” she says.