Gemini Image Bias

Comments discuss Google's Gemini AI generating historically inaccurate images by enforcing ethnic and gender diversity, such as non-white founding fathers or Vikings, and refusing to produce images of white people when prompted.

📉 Falling 0.3x · AI & Machine Learning
Comments: 1,769
Years Active: 18
Top Authors: 5
Topic ID: #9755

Activity Over Time

2008: 1
2009: 7
2011: 4
2012: 6
2013: 5
2014: 14
2015: 20
2016: 61
2017: 55
2018: 73
2019: 118
2020: 177
2021: 204
2022: 190
2023: 221
2024: 504
2025: 107
2026: 2

Keywords

theverge.com, AFAIK, US, AI, LLM, WW2, Beauty.ai, motherboard.vice, DALL, GPT, white, white people, images, prompt, image, pictures, generate, training data, ai training

Sample Comments

somnic Feb 22, 2024 View on HN

If it's not going to give you what it's promising, which is generating images based on the prompts you provide it, it's a poor service. I think it might make more sense to try to determine whether it's appropriate or not to inject ethnic or gender diversity into the prompt, rather than doing so without regard for context. I'm not categorically opposed to compensating for biases in the training data, but this was done very clumsily at best.
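somnic's suggestion, checking context before injecting diversity, can be sketched in a few lines. This is a minimal illustration assuming a naive keyword gate; the function name and marker list are made up for the example and are not how Gemini actually works.

```python
# Hypothetical sketch of context-gated diversity injection: only add
# a diversity instruction when the prompt carries no specific
# historical, geographic, or ethnic context. Illustrative only.

CONTEXT_MARKERS = (
    "viking", "founding father", "medieval", "samurai",
    "swedish", "nigerian", "white", "black", "asian",
)

def maybe_diversify(prompt: str) -> str:
    """Append a diversity hint only for context-free prompts."""
    lowered = prompt.lower()
    if any(marker in lowered for marker in CONTEXT_MARKERS):
        return prompt  # specific context: leave the prompt alone
    return prompt + ", depicting people of varied descent and gender"

print(maybe_diversify("a doctor talking to a patient"))    # gets the hint
print(maybe_diversify("a viking raiding party in 900 AD")) # unchanged
```

A real gate would need something far better than substring matching, but even this crude version avoids the category of error the thread is about.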

int_19h Oct 15, 2023 View on HN

Image models tend to have a lot of bias wrt assuming things like race and gender based on context when not given specific instructions.

bjord Feb 26, 2024 View on HN

99% sure this is the "google hates white people" thing that a specific set of people have been absolutely losing their minds about.

Gemini produced images of non-white people in a lot of situations in which it shouldn't have. I've read it theorized(?) that, in order to counteract disproportionately large amounts of pictures of white people in the training data, they basically added instructions after the fact in an effort to generate more non-white people, and totally over-corrected.

rasz May 1, 2025 View on HN

This and another example in reddit comments both converge on black male regardless of starting image.

https://www.theverge.com/2024/2/21/24079371/google-ai-gemini...

vsnf Nov 28, 2023 View on HN

I think this is the right way to handle it. Not all cultures are diverse, and not all images with groups of people need to represent every race. I understand that OpenAI, being an American company, wishes to showcase the general diversity of US demographics, but this isn't appropriate for all cultures, nor is it appropriate for all images generated by Americans. The prompt is the right place to handle this kind of output massaging. I don't want this built into the model.

Edit:

randomdata Feb 23, 2024 View on HN

> Then I asked Gemini to stop doing that / tried specifying racial backgrounds... Gemini refused.

When I played with it, I was getting some really strange results. Almost like it generated an image full of Caucasian people and then tried to adjust the contrast of some of the characters to give them darker skin. The white people looked quite photorealistic, but the black people looked like it was someone's first day with Photoshop.

To which I told it "Don't worry

WillPostForFood Feb 22, 2024 View on HN

> The problem you’re describing is that AI models have no reliable connection to objective reality.

That is a problem, but not the problem here. The problem here is that the humans at Google are overriding the training data, which would otherwise provide a reasonable result. Google is probably doing something similar to OpenAI. This is from the leaked OpenAI prompt: "Diversify depictions with people to include descent and gender for each person using direct terms. Adjust only human descriptions."
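The mechanism this comment describes can be made concrete with a short sketch: the service appends a fixed instruction to every request before it reaches the model, so even historically specific prompts carry the diversity directive. This is a hypothetical illustration; `build_model_prompt` and `generate_image` are invented names, not Google's or OpenAI's actual code, though the suffix text is the leaked instruction quoted above.

```python
# Hypothetical sketch of an unconditional server-side prompt rewrite.
# The suffix is applied to every image request, regardless of context.

DIVERSITY_SUFFIX = (
    " Diversify depictions with people to include descent and gender"
    " for each person using direct terms. Adjust only human descriptions."
)

def build_model_prompt(user_prompt: str) -> str:
    # The user never sees this suffix; it is appended server-side.
    return user_prompt + DIVERSITY_SUFFIX

def generate_image(user_prompt: str) -> str:
    model_prompt = build_model_prompt(user_prompt)
    # ... the call to the actual image model would go here ...
    return model_prompt

# A historical prompt receives the same blanket instruction, which is
# how "non-white founding fathers" results can come out.
print(generate_image("the signing of the Declaration of Independence"))
```

Contrast this with the context-gated version sketched earlier: the blanket suffix, applied without regard for the prompt's setting, is exactly the failure mode the thread complains about.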

Spivak Feb 22, 2024 View on HN

Because it's lazily inserting races like "of descent." You can try it with most models and get the same results. Try with prompts like "an ethnically white mechanic of african descent" or "a white german woman of hispanic descent" and you'll see that the non-white races win, because images of white people aren't often labeled as such but images of other races are, and so have a strong association.

"Ethnically swiss woman in traditio
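The experiment Spivak describes is easy to reproduce against any text-to-image endpoint. A hedged sketch follows, where `submit_prompt` is a hypothetical placeholder for a real client call, since no specific API is named in the thread.

```python
# Probe for caption-labeling bias: send prompts with conflicting
# descent descriptors and compare which descriptor the model follows.

CONFLICTING_PROMPTS = [
    "an ethnically white mechanic of african descent",
    "a white german woman of hispanic descent",
]

def submit_prompt(prompt: str) -> None:
    # Replace with a real client call to your image API of choice.
    print(f"would request: {prompt!r}")

for prompt in CONFLICTING_PROMPTS:
    # If training captions rarely label white subjects explicitly,
    # the "of ... descent" phrase tends to dominate the output.
    submit_prompt(prompt)
```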

ThrowawayTestr Feb 13, 2024 View on HN

"we only used pictures of white people in our training data, this is society's fault"

pb7 Feb 22, 2024 View on HN

Congratulations, here is your gold medal in mental gymnastics. Enough now.

It literally refuses to generate images of white people when prompted directly, while for every other race it not only happily obliges but produces only that specific race in all 4 results. It’s discriminatory, and based on your inability to see that, you may be too.