When Australia enacted its nationwide ban on social media for under-16s in December, critics called it heavy-handed and easy to bypass.
This week, as Elon Musk’s AI chatbot Grok flooded his social media platform X with AI-generated explicit images of women and girls, some involving minors, that argument began to lose its force.
The scandal has put the UK’s recent Online Safety Act under strain, reopening questions ministers had hoped to park.
Is platform-by-platform regulation enough? Or should Britain consider an outright, age-based ban similar to Australia’s?
British communications regulator Ofcom has launched a formal investigation into Musk’s increasingly controversial platform, warning the firm could face fines of up to 10 per cent of global turnover (in X’s case around £215m) if it is found to have breached its duties.
Tech secretary Liz Kendall has described the content as “vile” and confirmed new criminal offences targeting non-consensual deepfake imagery will be rushed into force.
For policymakers, Grok is evidence that the regulatory model itself may already be creaking.
A law chasing a faster technology
The Online Safety Act assumes a familiar structure: users post content, platforms distribute it, and regulators intervene when harm appears.
But generative AI collapses that chain. In Grok’s case, the chatbot doesn’t just show the content, it produces it at speed, embedded directly into a mainstream social media feed.
X has attempted to draw a distinction between the platform itself and xAI, the company behind Grok.
“We’ve created this environment online where everyone can be anonymous and see anything”, John Wilkinson, chief executive of age-verification firm TMT ID, told City AM.
“Then we end up debating whether something like Grok is free speech. That just feels absurd.”
Ofcom has acted swiftly, but the Grok case is a far harder test than previous enforcement action against smaller operators or offshore pornography sites.
And if enforcement falters here, the perception will be that the Act only bites when the targets are easy.
Why Australia looks different
Against that backdrop, Australia’s under-16 social media ban starts to look like a slightly less radical option.
Since implementation, platforms including Meta’s apps and TikTok have been legally required to prevent children from holding accounts.
In the first days of enforcement alone, Meta blocked more than 550,000 accounts.
Essentially, Australia is saying to the tech giants: this is the outcome we want, you work out how to do it.
From Wilkinson’s perspective, the policy’s strength lies in what it does not prescribe. “That matters because platforms are best placed to implement the solutions, not governments”, he said.
Critics have pointed to children using VPNs, alternative apps or parental logins to bypass the ban. Wilkinson argues that misses the point.
“That’s still success,” he argues. “You’ve moved the problem from effortless access to something that requires intent and effort”.
As a parent, he adds, the policy also changes the power dynamic at home. “It gives parents real backing. It’s much easier to say no when the system supports you.”
The business case for age verification
Meanwhile, for the tech sector, the direction of travel seems obvious.
Demand for age verification is accelerating, with TMT ID saying its age and identity services are up 80 per cent this year, as platforms prepare for tougher rules across various jurisdictions.
Wilkinson told City AM: “We’ll look back and wonder how we allowed completely anonymous access to everything”, predicting that age checks will become routine online.
The debate, he claims, should focus less on whether age verification works and more on how it is implemented. Heavy-handed documentation checks, for example, remain intrusive and unnecessary.
Above all, “you don’t want a database of children”, he added. “The best systems estimate age, approve or reject access, and then throw that data away”.
This approach, already used in sectors such as alcohol sales and online gaming, mirrors the offline world, simply recreating the check that usually happens in a shop.
Ministers remain cautious about a nationwide ban, with Keir Starmer having previously opposed such a blanket approach. He has argued that controlling content matters more than restricting access to it, though senior figures admit they are keeping a close eye on Australia’s roll-out.
Whether the UK ultimately follows Australia, adopts the time limits seen in some US states, or rewrites its AI rules altogether, Grok has put pressure on regulators who can no longer afford to wait before drawing firmer lines.