Google Gemini’s bias problem is just the start in AI diversity disasters

by Rebecca Gorman

As missteps like Google’s Gemini show, avoiding bias in AI is no quick fix, writes Rebecca Gorman

Last week, Google attempted to avoid making the same mistake WhatsApp made last year, when it released a generative “sticker” creator that responded to the prompt “Palestinian child” by conjuring up the image of a child holding a gun.

In what appears to be an over-correction for such AI bias problems, Google’s team seems to have trained its image generator, Gemini, to indiscriminately include racial and gender diversity in the images it generates. This was Google’s attempt to align its AI, and it was a rookie one.

The very prompt engineering designed to prevent racism instead ended up conjuring images of racially diverse Nazi soldiers. Like the proverbial genie, give a 2024 AI an instruction and it will follow it to the letter – even when we really wish it wouldn’t.
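To see why a blanket instruction backfires, consider a minimal, purely hypothetical Python sketch of this kind of indiscriminate prompt rewriting. The `rewrite_prompt` wrapper and its diversity suffix are illustrative assumptions, not Google’s actual implementation:

```python
# Purely illustrative sketch (not Google's real code): the suspected
# failure mode is a wrapper that rewrites every image prompt to demand
# diversity, with no check on whether that fits the request's context.

DIVERSITY_SUFFIX = "featuring a racially and gender-diverse group of people"

def rewrite_prompt(user_prompt: str) -> str:
    """Blindly append diversity modifiers to any image-generation prompt."""
    return f"{user_prompt}, {DIVERSITY_SUFFIX}"

# Applied uniformly, the rewrite produces exactly the kind of
# historically incoherent request described above.
print(rewrite_prompt("a 1943 German soldier in uniform"))
# -> "a 1943 German soldier in uniform, featuring a racially and
#    gender-diverse group of people"
```

Because the rewrite is applied to every request, with no judgement about historical or contextual fit, the instruction is followed to the letter even where it plainly should not be.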

The fact that generative AI circa 2024 behaves this way isn’t a surprise. The industry has been experimenting with generative AI for years now, and this is well-known behaviour. The surprise is that Google, a company powered by AI adtech and the inventor of frontier AI, didn’t anticipate this mistake.

Last year saw a dramatic swell in enterprises investing in in-house generative AI capabilities and experimenting with the potential of foundation models, a trend reflected in Nvidia’s share price just this week. 2024 has already seen a corresponding ebb towards realism as those enterprises have tested the models’ capabilities and limitations, developing an understanding of frontier AI that is based on experience rather than led by hype.

We are hitting a critical moment in the ‘frontier AI’ lifecycle, with the public being suddenly and rudely awoken from the illusion of human-like understanding. The Gemini furore has served as a very visible case study showing that generative AI doesn’t understand concepts like ‘don’t be racist’ after all, and we are finally able to entertain the possibility that ‘frontier AI’ is merely repeating what it has heard or seen, like a trained parrot.

This week has seen a surge in news reports about massive funding rounds for AI businesses such as Figure AI and Magic AI. But without conceptual understanding, robust alignment of AI systems isn’t possible, and debacles like Gemini, Air Canada, WhatsApp and Cruise will continue to haunt companies deploying AI-driven products.

The hope now is that other enterprises learn from Google’s mistakes and exercise reasonable caution. While many companies are jumping on the generative AI bandwagon, behind the scenes we’re hearing that AI apps are not attaining the reliability, robustness or functionality needed to bring them to market.

The ideal scenario may be one in which we can direct AIs through simple prompts to follow human instructions, but the reality is that we are not there yet. To quote an industry insider, “it’s only very brave people, or fools, who are deploying gen AI systems into production”.

Business leaders should still exercise caution with AI and invest in alignment and human verification to avoid being the next Gemini or Air Canada to hit the headlines.
