TikTok is set to lay off hundreds of staff in its content moderation and security team in London, a decision that has sparked concern just as the UK’s Online Safety Act comes into force.
The company, owned by ByteDance, is centralising its “trust and safety” operations globally and plans to increasingly rely on artificial intelligence to moderate content.
The move, which could affect around 300 London-based staff, is part of a worldwide reorganisation.
TikTok said it aims to focus its operational expertise in a few select locations, including Lisbon and Dublin, to be “more effective and quick.”
The company’s most recent accounts show revenue growing 38 per cent year on year to $6.3bn (£4.7bn), with pre-tax losses falling significantly, suggesting the cuts are not driven by financial distress.
The rise of AI in content moderation
TikTok said that over 85 per cent of content removed for violating its community guidelines is identified and taken down by automation.
The company has argued that AI can also help reduce the exposure of human moderators to graphic or distressing content.
This shift toward automation is not unique to TikTok; it mirrors a broader trend in the tech industry, where platforms such as Google and Meta are increasingly pushing advertisers to use AI-powered tools.
But critics, including the Communication Workers Union (CWU), have voiced concern. The CWU’s John Chadfield said that TikTok’s reliance on AI is a way to save money and shift moderation to regions where labour is cheaper.
The union also warned: “TikTok workers have long been sounding the alarm over the real-world costs of cutting human moderation teams in favour of hastily developed, immature AI alternatives”.
The Online Safety Act and AI’s role
The timing of the layoffs is particularly significant given the UK’s new Online Safety Act, which came into force on 25th July.
The law requires tech companies to implement “highly effective” age verification measures and prevent the spread of harmful material, or face penalties of up to £18m or 10 per cent of global turnover, whichever is greater.
In response to these requirements, TikTok recently introduced “age assurance” controls using machine-learning technology.
However, industry regulator Ofcom has not yet endorsed these AI-based systems.
This creates a tension between a company’s financial incentives to use AI for efficiency and the regulatory demands for verifiable safety mechanisms.
The reliance on AI is also a key part of TikTok’s broader commercial strategy. The company recently made it mandatory for TikTok Shop brands to use an AI ad tool called GMV Max, which automates ad campaigns to maximise sales.
While some advertisers have expressed doubts about ceding control to an algorithm, a TikTok Shop agency partner said the tool has been a “game changer” for some smaller merchants.
As TikTok pushes further into e-commerce and AI-powered moderation, the balance between profit, efficiency, and user safety remains a central and unresolved issue.