Google has issued an apology for what it describes as “inaccuracies in certain historical image depictions” generated by its Gemini AI tool. Gemini is designed to create images based on user prompts, but it has faced criticism for producing historically inaccurate depictions, particularly in relation to racial representation.

The apology comes in response to mounting concerns from users who noticed discrepancies in the images generated by Gemini. Specifically, there have been instances where historically white-dominated scenes included racially diverse characters. The company acknowledges that its efforts to generate a diverse array of results have fallen short, particularly for prompts involving specific white historical figures, such as the US Founding Fathers, who were depicted inaccurately.

In its statement addressing the issue, Google acknowledges the shortcomings of its Gemini AI tool and pledges to rectify the inaccuracies. The company emphasizes its commitment to improving the technology to ensure more accurate and culturally sensitive image depictions in the future.

According to Google’s statement, which was posted on X this afternoon, “We recognize that Gemini is producing inaccuracies in certain historical image depictions, and we are actively working to address these issues. While Gemini’s AI image generation aims to offer a broad representation of people, its current results are not meeting expectations in this context.”

On Thursday, Google announced that it is temporarily suspending its Gemini artificial intelligence chatbot’s ability to generate images of people. The decision comes just a day after the company issued an apology for what it described as “inaccuracies” in the historical depictions produced by the chatbot.

Users of Gemini took to social media this week to share screenshots of scenes traditionally dominated by white individuals that the AI had rendered with racially diverse characters. The posts sparked criticism and raised questions about whether Google is overcompensating in its attempt to mitigate the risk of racial bias in its AI model.

Andrew Torba, the founder and CEO of Gab, an online conservative platform, sharply criticized Google, saying that the tech giant’s handling of historical image depictions through its Gemini AI tool falls short of expectations: “Google has to literally put words in your mouth by secretly changing your prompt before it is submitted to the image generator.”

Andrew Torba: “Why Google’s Image AI Is Woke and How It Works

When you submit an image prompt to Gemini, Google is taking your prompt and running it through their language model on the backend before it is submitted to the image model.

The language model has a set of rules where it is specifically told to edit the prompt you provide to include diversity and various other things that Google wants injected in your prompt.

The language model takes your prompt, runs it through this set of rules, and then sends the newly generated woke prompt (which you cannot access or see) to the image generator.

Left alone without this process, the image model would generate expected outcomes for these prompts. Google has to literally put words in your mouth by secretly changing your prompt before it is submitted to the image generator.

How do I know this? Because we’ve built our own image AI at Gab here: gab.ai/start/gabby. Unlike Google, we are not taking your prompt and injecting diversity into it.”
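
Torba’s description, if accurate, amounts to a prompt-rewriting layer sitting between the user and the image model. Below is a minimal sketch of how such a pipeline could be wired. All names here (rewrite_prompt, generate_image, handle_request, DIVERSITY_RULES) are hypothetical stand-ins for illustration; Google’s actual implementation is not public and this does not represent it.

```python
# Illustrative sketch of the prompt-rewriting layer Torba alleges.
# Every name and rule below is hypothetical; Google's pipeline is not public.

DIVERSITY_RULES = [
    "depict a diverse range of ethnicities",
    "vary gender presentation",
]

def rewrite_prompt(user_prompt: str, rules: list[str]) -> str:
    """Stand-in for the backend language model that edits the prompt.
    Here the rules are simply appended; in the alleged pipeline an LLM
    would rewrite the prompt according to them."""
    return user_prompt + ", " + ", ".join(rules)

def generate_image(prompt: str) -> str:
    """Stand-in for the image model. Returns a description instead of
    pixels so the sketch stays self-contained and runnable."""
    return f"<image generated from: {prompt!r}>"

def handle_request(user_prompt: str) -> str:
    # The rewritten prompt is what reaches the image model; the user
    # only ever sees the original prompt they typed.
    hidden_prompt = rewrite_prompt(user_prompt, DIVERSITY_RULES)
    return generate_image(hidden_prompt)

print(handle_request("a US senator from the 1800s"))
```

The crux of the alleged design is the hidden intermediate step: only the original argument to handle_request is visible to the user, while the rewritten prompt consumed by the image model is never exposed.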

Examples Of Images Google Gemini AI Generated For Various Prompts:

Gemini’s results for the prompt “generate a picture of a US senator from the 1800s.” Given the historical context, one might expect the generated images to predominantly depict white men, reflecting the demographics and norms of that era’s political landscape.
