New rules under the Online Safety Act came into force on Monday, requiring tech firms to find and remove illegal content, or risk facing heavy fines.
The law applies to around 100,000 online services including social media platforms, online forums and messaging apps.
It has given the UK watchdog, Ofcom, the power to enforce stricter content moderation to ensure safer online practices across the country.
Natalie Greene, online safety expert at PA Consulting, said: “The effects won’t be seen overnight, but the Online Safety Act is a transformative piece of legislation.”
“It is the UK’s first comprehensive law tackling illegal and harmful content online, which means it has the potential to change the shape of the internet for years to come.”
Technology secretary Peter Kyle said: “In recent years, tech companies have treated safety as an afterthought. That changes today. This is just the beginning.”
“The Online Safety Act is not the end of the conversation; it’s the foundation.”
Impact on UK business sector
The Act places significant new compliance burdens on UK-based platforms, particularly smaller firms.
While larger organisations such as Meta and X have already invested heavily in moderation guardrails, many smaller platforms could struggle to meet the financial and operational demands of compliance.
“There is always a risk that automated detection systems will be unable to account for various contexts,” Ben Packer, partner at Linklaters, told City AM. “But no one expects a platform to be completely rid of harmful content.”
He also warned that “small firms who operate only in the UK and face issues complying may shut down” those parts of their business, if not altogether.
Larger firms, on the other hand, “will have those in place anyway”.
The biggest risk is that some businesses may be forced out of the UK market rather than adapt to the new regulations, particularly global platforms with limited UK market share.
Adult content platforms such as Pornhub have already started doing this in America, shutting down parts of their operations in states such as Kentucky over legislative issues and compliance costs.
For UK firms, the cost of compliance remains the biggest hurdle.
Implementing automated detection tools, conducting risk assessments and hiring moderators to review content all require substantial investment.
While this could be unsustainable for smaller platforms, regulatory uncertainty makes it hard for all businesses to predict the level of scrutiny they could face.
Age verification
Age verification and content moderation requirements are expected to expand further, too.
Ria Moody, TMT managing associate at Linklaters, told City AM: “Age regulation will be huge – whether that is a whole site of adult content, or parts of it.”
She explained that some websites will be able to apply age checks to parts of their site only, much as a paywall does.
Adult content platform OnlyFans, one of the UK’s biggest tech companies, has welcomed the new Act.
Having been investigated by Ofcom over age verification in 2021, and again in 2024, the platform now requires a nine-part identification process for its creators.
For fans, however, it relies solely on facial scanning technology, provided by a third-party organisation, to check users’ ages.
Its ‘challenger age’, the age at which it prompts additional verification, is set at 21.
The firm applies different verification processes depending on location and jurisdiction, such as age verification in the US or data retention in its German market.
Yet there is still no standardised approach to age verification in sight within the adult content industry.
What happens next?
Ofcom has been enforcing the illegal content rules since Monday, with a focus on tackling the most serious harms first.
Moody explained that these include child sexual abuse material, terrorism and fraud.
The Department for Science, Innovation and Technology (DSIT) said on Monday that “a major focus of these new regulations is the detection and removal of child sexual abuse content.”
Online predators are increasingly exploiting digital platforms to commit crimes, with 290,000 instances of this material recorded and removed last year.
Companies must submit risk assessments to Ofcom, detailing how they plan to detect and remove harmful content.
While Kyle dubbed the law a “major step forward”, businesses now face an increasingly complex regulatory landscape that could reshape the UK’s tech space.