Generative AI exists to produce the most statistically likely output. In human terms, that means mediocrity at scale, not diversity, says Paul Armstrong
Corporate diversity has turned into performance art. Generative AI now makes accessibility look effortless as captions appear automatically, transcripts compile themselves and summaries glow with the language of clarity. Every dashboard screams inclusion, yet the output often distorts meaning, erases nuance and misrepresents identity. Businesses think they are building bridges; in reality, they are installing mirrors. Technology that promises to open doors is quietly narrowing the frame of who gets seen and how.
Corporate leaders love metrics because metrics give the illusion of progress. AI has supercharged that illusion. Accessibility scores rise, inclusion reports grow glossier and no one checks whether the experience behind the numbers has actually improved. Algorithms trained on the narrowest slices of humanity now translate, caption and summarise for the world. The result is not inclusion but homogeneity disguised as help. The diversity wallpaper looks good from a distance; up close, it is peeling.
The economics of fake empathy
The business case for accessibility is vast. The Valuable 500 estimates the global disability and inclusion market at $13 trillion. That figure is now quoted in every corporate deck about AI for good. Companies chasing it often invest more in optics than outcomes. Automation feels efficient, but AI systems built to remove friction also remove individuality. Translation models trained to neutralise cultural bias flatten tone, humour and dialect. Captioning software built for clarity sanitises emotion and strips personality from speech.
The Trump administration’s aggressive rollback of diversity and inclusion initiatives in the US has sent shockwaves through global boardrooms, prompting reversals and strategic hesitation from companies wary of political reprisal. The chill is spreading faster than the policy, and it shows in how many companies now talk about inclusion without ever funding it. Despite sitting on reserves larger than the economies of small nations, too many are showing a remarkable lack of courage.
Lisa Riemers, a globally recognised communications expert and co-author of ‘Accessible Communications’, thinks “the business imperative for accessibility is clear. The legal landscape has shifted, even if it’s perceived to be falling out of fashion. Tech companies are over-promising fast fixes to meet obligations while often making things worse. Generated captions and alternative text are better than missing them out completely – but they can create a false narrative that doesn’t include what’s important, over-correct names and struggle to recognise different accents.”
Corporate appetite for this illusion is understandable. Generative AI churns out instant deliverables: polished, inclusive-looking content without human oversight. Fake empathy is dangerous because it makes leaders feel righteous while detaching them from reality. When every brand presentation, video and press release looks effortlessly inclusive, few notice when meaning is lost or people are misrepresented.
The problem extends beyond internal communications. Aid agencies are not immune: they (or the agencies they hired) have circulated AI-generated poverty porn, fake images of suffering created to attract donations. Those pictures were designed to evoke compassion but succeeded only in trivialising real lives. Businesses are heading down the same road when they use generative models to depict diversity that does not exist. Gone are the days of changing a skin tone in Photoshop; now there is a generative AI tool to create a brand-new ‘human’. Synthetic representation easily becomes a substitute for real inclusion, and the cycle continues.
The drag to the statistical middle
Generative AI systems are built for prediction, not perception. The purpose is to generate the most statistically likely output. In human terms, that means mediocrity at scale. Diversity gets sanded down to the mean. Models drag everything to the middle: language, tone, and identity. Internal audits at several large tech firms have shown how bias mitigation protocols, designed to make systems fairer, often erase cultural or linguistic nuance entirely. The result is bland uniformity presented as ethical progress.
Consequences are already visible. Recruitment AI built to anonymise CVs has been caught penalising candidates with unusual names. Sentiment models used in customer feedback tools struggle to parse dialects or code-switching, tagging them as aggressive or unclear. The risk is not that AI offends, but that it quietly excludes. A system that cannot see difference cannot serve it.
Reform MP Sarah Pochin’s ‘wrong and ugly’ take on minority ‘overrepresentation’ in adverts shows how fragile Britain’s culture-war ego still is. Public sentiment is shaped by what producers put on screen and by what the people who design algorithms decide to surface. When models misread or erase minority presence in datasets, they do not just misinform companies but distort cultural understanding. Businesses relying on those tools absorb bias invisibly, believing they are objective while automating discrimination.
The inclusion strategy that actually works
Boards crave a simple narrative: technology equals progress. Leadership teams now face a harder truth. Inclusion cannot be automated; it must be designed. Companies serious about accessibility must build human verification loops. Every AI-generated caption, alt text, or translation needs a human check by someone who understands the context behind the content. Inclusion is not a software feature; it is a process of continuous correction.
AI suppliers should be audited for demographic balance in training data, with performance metrics published openly. Just as firms report carbon emissions or pay equity, they should begin reporting AI bias rates. If an accessibility feature underperforms for non-standard voices or minority groups, it should be disclosed. Hiding bias behind proprietary algorithms is not innovation; it is abdication.
Businesses must also broaden how they define inclusion. The goal is not only to communicate clearly but to communicate honestly. That requires diverse creative teams, human editors, and lived experience in the loop. Companies should stop thinking of accessibility as a marketing line and start treating it as infrastructure, part of how products and communications are built from day one.
Leaders have a chance to use AI to extend inclusion rather than fake it: giving marginalised voices access to the tools, not just making them the subject of the output. Those that don’t will miss out on a slice of that $13 trillion market and face higher hiring costs, rising litigation risk, lost innovation value and declining consumer trust. The illusion of inclusion is not cheap, and no AI-powered PR campaign will repair the higher turnover, lower productivity and reputational damage it leaves behind.
Beyond the illusion
The coming year will reveal which businesses are serious about inclusion and which are addicted to automation theatre. AI has not made diversity easier; it has made faking it effortless. Companies that mistake visual representation for ethical transformation will find themselves accused not just of bias, but of dishonesty.
Executives must remember that diversity dashboards and accessibility scores are not evidence of progress; they are only as good as the judgment behind them and the outcomes that flow from them. The illusion of inclusion flatters leadership into thinking the job is done, when in reality, the hardest work is still human. Businesses that use AI to enhance empathy, not replace it, will build trust and credibility that no model can manufacture. The rest will drown in peeling diversity wallpaper, wondering why the room still feels empty.