AI in recruitment: Levelling the playing field or entrenching bias?

AI in recruitment aims to reduce hiring biases and streamline candidate selection. But do these tools truly make hiring fairer, or are they reinforcing hidden biases? asks Paul Armstrong

The AI hiring wave has arrived, promising to tidy up recruitment’s inefficiencies with sleek algorithms and predictive analytics. Job descriptions are optimised for clicks, CVs sifted in seconds, and interview scheduling is a thing of the past. Companies market it as progress, claiming these tools can strip bias from the hiring process. But beneath the sheen of efficiency, one question looms: are these systems levelling the playing field or reinforcing age-old biases under a high-tech veneer?

Take Amazon’s famous cautionary tale. Everyone’s favourite cardboard abuser’s now-infamous AI recruitment tool systematically penalised CVs mentioning “women’s” activities. Why? The training data was drawn from years of male-dominated hiring patterns. Rubbish in, rubbish out. Despite these failures, adoption of such systems is surging, with platforms like HireVue promising “bias-free” candidate evaluations via facial analysis and tone detection — tools that have faced accusations of pseudoscience. For instance, critics argue that facial analysis perpetuates race and gender biases because the datasets these tools are trained on tend to feature predominantly white faces (and hands), making it harder for marginalised candidates to succeed.

Several tools are “actively working” to tackle these biases. Pymetrics, for example, blends neuroscience games with machine learning to evaluate candidates’ cognitive and emotional traits. Crucially, it audits its algorithms to identify and eliminate bias. SeekOut enables companies to tap into diverse talent pools by identifying underrepresented groups. And Textio analyses job descriptions, providing insights to make language more inclusive and appealing to broader audiences. All fine, but are they getting to the crux of the issue? I would argue not.

But even with these advancements, the paradox persists. Recruitment AI optimises for “best-fit” candidates — but who defines “best”? These algorithms often rely on patterns found in historical data, leading organisations to inadvertently replicate the status quo. That is particularly troubling when innovation demands a diverse workforce. Hiring outliers — those who think differently or challenge assumptions — is critical for disrupting conventional wisdom. Yet algorithms trained on historical norms will usually reject such candidates outright.

The ethical dilemmas don’t end there. Behavioural economists have long known that bias isn’t an anomaly — it’s deeply embedded in human decision-making. When these biases feed AI systems, they don’t disappear; they scale, becoming harder to spot and challenge. Tools designed to root out bias risk becoming gatekeepers of inequality, their flaws buried in black box systems no one fully understands.

AI’s potential to drive equality isn’t theoretical — it’s achievable with the right practices. For instance, synthetic data is an emerging (somewhat controversial) method for training AI systems without replicating historical biases. By creating artificial datasets that mimic real-world scenarios while erasing discriminatory patterns, synthetic data could offer a path toward fairer algorithms.
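
A toy sketch of the idea (the data below is entirely invented for illustration; real synthetic-data pipelines are far more sophisticated) is to generate artificial candidate records in which the sensitive attribute is sampled independently of the outcome, so a model trained on them cannot learn the historical link between the two:

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 1_000

# Synthetic candidates: skill-related features drawn from plausible ranges,
# while gender is sampled independently of the "hired" outcome, so the
# historical correlation between the two is deliberately absent
synthetic = pd.DataFrame({
    "years_experience": rng.integers(0, 20, n),
    "test_score":       rng.normal(70, 10, n).round(1),
    "gender":           rng.choice(["F", "M"], n),
})

# The outcome depends only on the skill features, never on gender
synthetic["hired"] = (
    (0.5 * synthetic["years_experience"] + 0.3 * synthetic["test_score"]
     + rng.normal(0, 2, n)) > 24
).astype(int)

print(synthetic.groupby("gender")["hired"].mean())  # hiring rates should now be similar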

Bias auditing tools are also gaining traction. Fairlearn and Aequitas, for example, provide frameworks for identifying, measuring, and mitigating bias in machine learning models. These tools allow organisations to scrutinise the decisions their AI systems make and adjust their processes accordingly.
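
To make that concrete, here is a minimal, purely illustrative sketch of the kind of check Fairlearn supports (the candidates, labels and “gender” field below are invented for this example): comparing how often a screening model shortlists candidates from different groups.

from fairlearn.metrics import MetricFrame, selection_rate, demographic_parity_difference

# Invented screening data: 1 = shortlisted, 0 = rejected
y_true = [1, 0, 1, 1, 0, 1, 0, 0]   # past "good hire" labels (hypothetical)
y_pred = [1, 0, 1, 0, 0, 1, 0, 0]   # the model's shortlisting decisions (hypothetical)
gender = ["F", "F", "M", "F", "M", "M", "M", "F"]  # hypothetical sensitive attribute

# Selection rate per group: what share of each group the model shortlists
audit = MetricFrame(metrics=selection_rate, y_true=y_true, y_pred=y_pred,
                    sensitive_features=gender)
print(audit.by_group)

# Demographic parity difference: the gap between the highest and lowest group
# selection rates; 0 would mean the groups are shortlisted equally often
print(demographic_parity_difference(y_true, y_pred, sensitive_features=gender))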

Transparency is key

Explainable AI (XAI) is an essential development, offering insights into why an AI system made a specific decision. This could help organisations identify flaws and allow candidates to challenge decisions they perceive as unfair. For example, if an algorithm rejects a candidate based on certain keywords, XAI tools could highlight this and prompt recruiters to question whether those terms truly reflect job requirements.
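
As a sketch of what that could look like in practice (the features, labels and model below are invented for illustration, not drawn from any real screening system), explainability libraries such as SHAP can attribute a model’s decision to individual inputs:

import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier

# Invented CV features a screening model might use
X = pd.DataFrame({
    "years_experience":  [2, 7, 4, 10, 1, 6],
    "keyword_matches":   [3, 8, 5, 9, 2, 7],    # job-description keywords found in the CV
    "career_gap_months": [12, 0, 6, 0, 24, 3],
})
y = [0, 1, 0, 1, 0, 1]  # past shortlisting decisions the model learns from (invented)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Explain one rejected candidate: which features pushed the decision, and by how much?
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X.iloc[[0]])
print(contributions)  # per-feature contributions a recruiter (or candidate) could interrogate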

Regulation is catching up

The EU AI Act categorises recruitment algorithms as high-risk systems, subjecting them to stricter transparency and accountability measures. In the UK, the Information Commissioner’s Office (ICO) has issued guidance on using AI in recruitment in line with GDPR principles. Meanwhile, the US Equal Employment Opportunity Commission (EEOC) is pushing for greater oversight of how AI tools affect hiring decisions. Of course, all of these still lack granular detail on enforcement and compliance.

But regulation alone isn’t enough. Many organisations are turning to ethical frameworks, such as IBM’s AI Fairness 360, an open-source toolkit that helps developers test for and mitigate bias. These tools are vital, but their adoption remains voluntary, leaving room for misuse or neglect.
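
As a rough illustration of what such a toolkit checks (the data below is invented; the metrics are part of AI Fairness 360), a developer might measure whether shortlisting rates differ between groups before a model is ever deployed:

import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Invented screening outcomes: shortlisted = 1 means the candidate progressed
df = pd.DataFrame({
    "gender":      [1, 1, 0, 0, 1, 0, 1, 0],   # 1 = privileged group in this toy example
    "shortlisted": [1, 1, 0, 1, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(df=df, label_names=["shortlisted"],
                             protected_attribute_names=["gender"])

metric = BinaryLabelDatasetMetric(dataset,
                                  unprivileged_groups=[{"gender": 0}],
                                  privileged_groups=[{"gender": 1}])

# Disparate impact: ratio of the two groups' shortlisting rates (1.0 = parity);
# values well below 1 are the kind of red flag these audits are meant to surface
print(metric.disparate_impact())
print(metric.statistical_parity_difference())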

Beyond these frameworks, AI remains in experimental territory in this area. Generative AI, like ChatGPT, is being trialled to create personalised candidate assessments, offering real-time feedback during application processes. Similarly, VR and AR technologies are being used to simulate real-world scenarios, providing immersive evaluations of candidate skills and decision-making. Candidates are fighting back too, using feature-tracking software that lets them glance at notes or AI tools while being questioned by humans and avatars.

There’s also growing interest in neurodiversity-focused AI, designed to ensure recruitment processes accommodate neurodiverse candidates. These tools adjust assessments to account for cognitive differences, making hiring more inclusive. But they also raise questions about privacy and consent: how much data should candidates be required to share?

Emotion AI and sentiment analysis remain controversial, but they are still in use. While proponents argue they provide deeper insights into engagement and confidence, critics point out their susceptibility to cultural and personal biases, rendering them unreliable for critical decisions. Perhaps AI will be the great leveller for tired, overworked, emotionally drained Simon at 4:30pm on a Thursday?


AI in hiring isn’t inherently bad. It’s a mirror, reflecting our flaws back at us. To make it work, organisations have to chase more than efficiencies. That starts with genuinely questioning the data the tools are built on and the outcomes they are optimising for. AI has the potential to make hiring fairer, but only if companies take responsibility for how these tools are designed and deployed. Well-designed HR AI keeps people in the recruitment process; it doesn’t replace them. AI isn’t a fix for flawed hiring practices; it’s a force multiplier. Left unchecked, it amplifies bias, but used deliberately, it could change cultures, fix rifts and drive innovation.

Paul Armstrong, founder of TBD Group, helps companies navigate emerging technologies to innovate and avoid disruption
