
Why using ChatGPT images could cost you down the line


ChatGPT’s new image generator is no doubt tempting for forward-looking marketeers, but don’t shrug off the legal risks, writes Paul Armstrong

ChatGPT’s new image tools didn’t go viral because they were useful; they went viral because they made people feel seen, literally. With a few prompts, users could generate stylised portraits in the aesthetic of Studio Ghibli, Pixar, dark anime, dreamcore and anything else the model could mimic. Nothing you couldn’t do before, but now it’s inside the mothership, and that’s the key.

What followed was less about AI and more about identity and utility. People shared versions of themselves that felt curated, elevated, optimised. The numbers spiked. ChatGPT hit record usage levels, not because it helped people do more, but because it was right there, and offered a way to see and share stylised versions of the self. Identity became a product feature.

Behind the Ghibli glows and Pixar polish sits a growing problem. The same tools now fuelling mass adoption are dragging businesses into legal and operational chaos. Popularity has outpaced governance. Many of these visuals rely on prompts that reference recognisable IP. When someone asks for “a portrait in the style of Ghibli”, the result is more than a nod – it trades directly on a protected visual identity. Most of these tools were trained on datasets scraped from the internet, often including copyrighted works. Consent from rights holders was rarely sought, let alone granted. Studios like Ghibli and Disney are not quietly observing. Legal teams are already preparing action, and enforcement won’t be manual.

The consequences of using AI images

Rights holders are expected to deploy their own AI systems trained to identify infringing content in real time. Amazon already does this for itself and for the brands on its site. Detection will be automated, and takedown notices will arrive in volume. The same technology used to generate content at scale will now be used to police it. Platforms like OpenAI, Midjourney and Stability will be the first targets, facing pressure to gate inputs, apply content filters and more aggressively watermark outputs. Yet this enforcement pressure will quickly cascade to businesses using these platforms commercially. Marketing teams, product designers and brand managers relying on AI for content will find themselves dealing with flagged assets, takedowns and sudden policy shifts.

A campaign built on AI visuals might be pulled with no warning. A product launch featuring generative artwork could be delayed due to IP disputes. In some cases, internal teams may not even realise a style crosses the line until a complaint lands. The risks aren’t just legal – they’re reputational. Consumers may perceive derivative AI content as lazy or dishonest. With tools like Elon Musk’s Grok, which ships with few guardrails, the issues multiply quickly. Copyright infringement is one concern, but the broader issue is brand authenticity. In a media environment increasingly saturated with AI-generated content, sameness is a liability. What once felt novel is already becoming generic.

Visual fatigue is setting in. Audiences can already spot AI imagery, and reactions are shifting from fascination to mistrust. Generative content often shares the same visual tropes: glossy textures, symmetrical faces, over-clean lines. Creative distinction starts to blur. Brands that rely too heavily on generative aesthetics risk becoming indistinguishable. Worse, they may appear as if they’re hiding behind the machine. In industries where storytelling, originality or visual craft matter, that’s a reputational hit that will cost. 

Liability is another unresolved frontier. If a generated image infringes on IP, responsibility remains unclear. The user wrote the prompt, the platform hosted the model, the developer built the architecture. Without legal precedent, no one knows who owns what – or who pays when it goes wrong. Insurers are already moving faster than regulators. Some are reviewing commercial liability policies and starting to exclude claims tied to AI-generated media. If that trend continues, businesses experimenting with generative tools could find themselves exposed, both creatively and financially.

Platform risk is escalating in parallel. Content flagged as infringing – even mistakenly – can be removed with no route to appeal. Entire accounts could be restricted, suspended or deprioritised by moderation systems. For companies using generative tools at scale, a single misstep could jeopardise visibility, ad spend or customer reach. The more content produced, the greater the risk.

Some businesses are beginning to implement safeguards. Internal review workflows, style audits, prompt restrictions and AI policy guidelines are being introduced. Creative teams are being briefed on how to use generative tools without leaning on protected styles. Legal teams are quietly assessing model provenance and contract clauses. These efforts are a start, not a solution.
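For teams formalising prompt restrictions, even a simple screen can catch the most obvious protected-style references before a prompt reaches a generator. Below is a minimal sketch, assuming a plain denylist approach; the term list and the screen_prompt function are hypothetical illustrations, not any platform’s actual safeguard or a legal judgement on any term.

```python
# Hypothetical prompt-restriction screen: flag prompts that reference
# protected styles before they are sent to a generative image tool.
# The denylist below is purely illustrative.

PROTECTED_STYLE_TERMS = [
    "studio ghibli", "ghibli", "pixar", "disney",
]

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, matched_terms) for a candidate image prompt."""
    lowered = prompt.lower()
    matches = [term for term in PROTECTED_STYLE_TERMS if term in lowered]
    return (len(matches) == 0, matches)

if __name__ == "__main__":
    allowed, hits = screen_prompt("A portrait in the style of Ghibli")
    if not allowed:
        # In practice, flagged prompts would route to human review
        # rather than being silently rejected.
        print(f"Prompt flagged for review; matched terms: {hits}")
```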

Fighting AI with AI

Automation on both sides is increasing. Generative speed creates risk velocity. AI-generated content will be met by AI-enforced compliance. The same systems being used to scale identity creation will be used to enforce style boundaries. We’re unlikely to see a “move fast and break things” environment hold up with this issue. Licensed style marketplaces are likely to emerge, giving companies access to protected aesthetics through official channels, with royalties baked in. At scale, this could normalise a new layer of content licensing built directly into generative workflows. That infrastructure doesn’t exist yet, and, until it does, most businesses are operating without legal clarity. Not good. 

Due diligence now has to move upstream. Teams must assess model origins, understand dataset licensing (or lack thereof) and start building internal playbooks for AI-generated media. Legal grey zones, platform volatility, insurance gaps and creative dilution are no longer theoretical – they are immediate. Speaking at TBD ‘Eigengrau’ recently, Simon Paterson MBE, US head of counter-disinformation for Edelman, explored the Edelman Trust Barometer and spoke about the issues with trust and corporate use of AI-infused media: “Risk is often delegated to the back of the queue when we’re talking about communication challenges, but it needs to come to the front. It needs to be more proactive, preemptive and persistent. We need to be out there building resilience ahead of time across the key areas that enable [a company] to operate.”

Generative media is fast and impressive. It’s also unpredictable and expensive if mishandled. Businesses need to choose their exposure level wisely. Scaling without structure creates vulnerability. Avoiding AI entirely may cost agility. The sweet spot lies in using these tools with precision, policies and a plan. The avatars were fun. While the adoption spike was easy to forecast, the clean-up is going to be anything but, and it will cost everyone in multiple ways.

Paul Armstrong is founder of TBD Group and author of Disruptive Technologies
