Ofcom’s inquiry into the systemic risks posed by Elon Musk’s Grok AI, which has enabled the creation of sexualised images on X, has created a significant and foreseeable liability for UK businesses, says Paul Armstrong
Many UK businesses seem unwilling to admit it, but they are now operating in dicey waters on X. Is it time to stop Grok and roll?
Ofcom has moved from commentary into procedure over Grok enabling sexualised AI imagery to be created and then posted on X, a move that changes the nature of exposure for any person or company using the platform. While Musk believes this to be a freedom of speech issue, in reality most companies aren’t in business to fight for his right to let people make sexually explicit content and distribute it next to their tweets about 10 per cent off their next pack of Wotsits.
Grok is Elon’s answer to ChatGPT, and in recent days he has moved to curb, not stop, people making pornographic images with the tool in the hope of calming Starmer and pals. Elon’s free speech push is, simply put, going to cost most businesses money: checking risk exposure, flipping marketing plans, and more. Boards should recognise this moment clearly, because regulators rarely open systemic investigations without believing leverage already exists.
Corporate behaviour often waits for outcomes, whereas governance tends to work on signals and direction of play. Many companies will be looking at their campaign performance, follower counts and crisis playbooks and asking: is it worth it? Mere presence becomes a de facto posture on one side of a ‘debate’ most don’t want to discuss at work. Insurers will be rubbing their hands gleefully, while risk and compliance pop paracetamol. Expect to be asked why the platform remained acceptable while pornography was being actively created and distributed. When employee safety, impersonation and harassment risks multiply, it’s time to get out of Doge, dodgy, dodge.
What changed and why it matters
Generative AI tools are shortening the distance between experimentation and damage. Ofcom’s inquiry is not about moderation misses or individual posts slipping through; its worry is that issues sitting at the level of system design, such as safeguards and foreseeable misuse under the Online Safety Act, are being ignored. Grok sits at the centre because capability matters more than content. Ask yourself whether reasonable steps were taken relative to predictable harm, because the courts are now going to.
Capability introduces liability even when usage remains uneven or opportunistic, a lesson Musk is about to learn. Restricting access doesn’t mean squat if misuse remains foreseeable at scale. In essence, Ofcom wants to highlight that design choices don’t meet the threshold of reasonable mitigation, moving the issue from community standards to architecture and incentives. All of which creates column inches, keeps Musk in the headlines and massages his ego, and with $750bn-plus he has eff-you money for Starmer and the UK for eons.
Starmer and pals don’t seem all that worried; their language has already shifted, with a block no longer unthinkable even if the how and the likelihood remain widely contested. Businesses should be in contingency planning territory, not hypotheticals. The larger question is: would we miss X? A few years ago, most would have said yes, but millions have since left the platform because of right-wing accounts, trolls and Elon’s every move being pumped into your eyeballs every time you fire up the app. The data suggests businesses aren’t that worried if X goes the way of the dodo – UK revenues are down 60 per cent due to content worries.
Brands that remain on X right now have an important decision to make: continue flogging a dying bird, or wait for leaders and boards to ask why they are optionally carrying such a risk. Trading continued corporate exposure for a marketing profile is an interesting move when that network accounts for just 10 per cent of social media visits in the UK, a smaller share than it held back in April 2017.
Beyond the sexual imagery, there are further issues around second order effects such as harassment and executive impersonation. Ask the Hong Kong gentleman who approved a fraudulent £20m-plus transaction because of an AI deepfake whether this is a problem. Examples like this are becoming more common as people struggle to tell the difference between AI and real people.
Second order effects compound quickly. Employee safety considerations escalate once impersonation, harassment and non-consensual imagery are accepted by regulators as predictable outcomes rather than edge cases. Ditching X now gets you ahead of all of this woe, and you might just save some money on customer service, marketing and the rest. Do your customers trust you more or less because you are on X?
Governance over optics
Debates about censorship miss the commercial point entirely. UK law already separates free speech from distribution; the courts settled that distinction long before generative AI entered board agendas. AI teams are now doing far more red teaming specifically to anticipate issues and demonstrate proportionate safeguards. Elon doesn’t seem to want to do the due diligence and seems content simply to sow discontent. When you have $750bn-plus and can pay any fine levied against you, cutting Elon off from a drama pipeline may just be what’s best for an entire nation. Country bans aren’t unheard of, something ChatGPT found out when Italy temporarily banned it in 2023. Businesses have a choice about using these platforms; many keep using them out of sunk cost fallacy, and most could probably put X in that bucket if they’re honest with themselves.
Nothing is set yet; the courts could constrain Ofcom, or X could alter its safeguards. But waiting for ultimate clarity could damage brands, increase risk unnecessarily and stretch resources further. The question leaders and boards need to ask themselves is: is X a need-to-have channel? For most, X’s numbers probably don’t support the drama, but they keep it around in case the worst happens. Perhaps it’s time to really get your affairs in order?
Over the next three years, precedent will be set around how generative AI capability is governed rather than tolerated. Early decisions will shape audit trails, insurance terms and future defensibility. Once a platform enters active statutory investigation for systemic failure, still being there stops looking neutral and starts looking like negligence or a board-level decision. Businesses that get ahead of tools that push agendas rather than minimise drama will increasingly leave those tools, as they have X, on the wrong side of the corporate chequebook. If all else fails, ask yourself: what’s the upside? For most on X these days, that upside is marginal at best; at worst, it shows you’re not a serious company.