UK businesses could theoretically unlock £532bn in productivity through AI-assisted recruitment, according to new research from LinkedIn.
The report suggests that AI tools, such as LinkedIn’s own ‘hiring assistant’ tool, can free recruiters from administrative tasks like screening CVs, writing job descriptions and scouting talent, allowing them to focus on strategic hiring decisions.
Janine Chamberlin, UK country manager at LinkedIn, told City AM: “Driving AI adoption is more than just a tech challenge, it’s a talent challenge.
“Professionals across all functions need both the right tools and the right training to really unlock AI’s potential.”
LinkedIn cites anecdotal evidence from firms such as Insite, which reported a 20 per cent revenue increase after deploying AI-assisted recruiting.
Across sectors including healthcare, engineering and manufacturing, recruiters are under pressure to deliver faster, more efficient hiring while filling roles with increasingly specialised skill sets.
AI at every stage
But LinkedIn’s rosy projections are being tempered by broader industry research highlighting fairness and bias concerns.
A survey of 1,000 UK HR and talent professionals by background-checking platform Zinc found that 73 per cent of recruiters now use AI at some point during hiring, but 71 per cent said automation reduces personalisation in the process.
Meanwhile, over a third automate candidate rejections entirely.
Charlotte Hall, co-founder of Zinc, said: “AI is supposed to make hiring smarter, not colder. Candidates want clarity and connection, not an experience that feels generated by a machine. When automation replaces empathy, the relationship breaks down.”
Studies also show that AI tools can perpetuate bias. Research by the Alan Turing Institute and the Institute for Ethical AI in the UK has repeatedly flagged that algorithms trained on historical hiring data can replicate gender, racial, and socioeconomic disparities.
AI systems may favour candidates with backgrounds similar to those already in the organisation, unintentionally disadvantaging women or ethnic minorities, as well as those from less traditional career paths.
Mind the skills gap
Even beyond fairness concerns, the adoption of AI in recruitment is far from universal.
LinkedIn found that nearly eight in 10 UK recruiters say they lack the training to use AI effectively, despite 90 per cent reporting that chief executives are relying on them to build the workforce of the future.
Chamberlin told City AM: “Nearly eight in 10 have told us they don’t yet feel well-equipped to use AI for hiring. That’s something we need to fix.”
Accenture research shows that 80 per cent of AI job postings are concentrated in London, leaving regions outside the capital lagging behind in AI skills and training.
Meanwhile, EY data indicates that tech expertise is increasingly sought on financial services boards, but many firms still lack the internal capability to manage AI-related risks or embed AI ethically.
Efficiency vs empathy
The tension between efficiency and human judgement is becoming acute.
While AI can accelerate candidate screening, 84 per cent of recruiters report that senior hires still take over a month, with 15 per cent taking more than two months, giving disengaged candidates time to look for alternatives.
Meanwhile, 40 per cent of new hires leave within six months due to unmet expectations, pointing to the limitations of automated processes in matching candidates to relevant roles.
Chamberlin argued that AI can be a force for smarter hiring rather than faster hiring alone.
“While improving efficiency and productivity is important, the reason recruiters got into this role is to help people get jobs and find the right candidate for the role”, she told City AM.
“That’s what we are doing with Hiring Assistant, helping companies connect with the right talent with the right skills for the role at hand.”
Yet AI’s promise will not be realised without careful management. Beyond training and tools, companies must implement robust governance, audit algorithms for bias, and ensure human oversight remains central.
Reports from Thomson Reuters and the Institute for Ethical AI indicate that unchecked reliance on automated systems can exacerbate inequality and lead to reputational risks.