Gemini Image Bias
Comments discuss Google's Gemini AI generating historically inaccurate images by enforcing ethnic and gender diversity, such as non-white founding fathers or Vikings, and refusing to produce images of white people when prompted.
Sample Comments
If it's not going to give you what it's promising, which is generating images based on the prompts you provide it, it's a poor service. I think it might make more sense to try to determine whether or not it's appropriate to inject ethnic or gender diversity into the prompt, rather than doing so without regard for context. I'm not categorically opposed to compensating for biases in the training data, but this was done very clumsily at best.
Image models tend to have a lot of bias wrt assuming things like race and gender based on context when not given specific instructions.
99% sure this is the "google hates white people" thing that a specific set of people have been absolutely losing their minds about. Gemini produced images of non-white people in a lot of situations in which it shouldn't have. I've read it theorized that, in order to counteract the disproportionately large number of pictures of white people in the training data, they basically added instructions after the fact in an effort to generate more non-white people, and totally over-corrected.
This and another example in Reddit comments both converge on a black male regardless of the starting image.
https://www.theverge.com/2024/2/21/24079371/google-ai-gemini...
I think this is the right way to handle it. Not all cultures are diverse, and not all images with groups of people need to represent every race. I understand that OpenAI, being an American company, wishes to showcase the general diversity of US demographics, but this isn't appropriate for all cultures, nor is it appropriate for all images generated by Americans. The prompt is the right place to handle this kind of output massaging. I don't want this built into the model.
> Then I asked Gemini to stop doing that / tried specifying racial backgrounds... Gemini refused.

When I played with it, I was getting some really strange results. Almost like it generated an image full of Caucasian people and then tried to adjust the contrast of some of the characters to give them darker skin. The white people looked quite photorealistic, but the black people looked like it was someone's first day with Photoshop. To which I told it "Don't worry…
> The problem you're describing is that AI models have no reliable connection to objective reality.

That is a problem, but not the problem here. The problem here is that the humans at Google are overriding the training data, which would provide a reasonable result. Google is probably doing something similar to OpenAI. This is from the leaked OpenAI prompt:

> Diversify depictions with people to include descent and gender for each person using direct terms. Adjust only human descriptions.
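A minimal sketch of how a fixed instruction like the leaked line above could be bolted onto user prompts before they reach an image model. Only the instruction text comes from the comment; the function name, the keyword heuristic, and the pipeline shape are assumptions for illustration, not Google's or OpenAI's actual implementation.

```python
# Illustrative sketch only; the heuristic and pipeline shape are assumptions.

# Verbatim from the leaked OpenAI prompt quoted in the comment above.
DIVERSITY_INSTRUCTION = (
    "Diversify depictions with people to include descent and gender for "
    "each person using direct terms. Adjust only human descriptions."
)

# Crude keyword list standing in for whatever check decides a prompt
# "depicts people" -- hypothetical, chosen for the example.
PEOPLE_WORDS = {"person", "people", "man", "woman", "king", "soldier", "viking"}

def rewrite_prompt(user_prompt: str) -> str:
    """Append the diversity instruction when the prompt seems to depict people.

    A context-blind check like this is exactly the failure mode the thread
    complains about: it fires the same way for "a 9th-century viking" as it
    does for "a crowd at a modern US airport".
    """
    tokens = {t.strip(".,!?").lower() for t in user_prompt.split()}
    if tokens & PEOPLE_WORDS:
        return f"{user_prompt}\n\n{DIVERSITY_INSTRUCTION}"
    return user_prompt

print(rewrite_prompt("A painting of a viking longship crew"))
```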
Because it's lazily inserting races like "of descent." You can try it with most models and get the same results. Try prompts like "an ethnically white mechanic of african descent" or "a white german woman of hispanic descent" and you'll see that the non-white races win, because images of white people aren't often labeled as such, while images of other races are and carry a strong association. "Ethnically swiss woman in traditional…
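For anyone who wants to actually run the experiment this comment suggests, a tiny harness like the following would do it. It uses OpenAI's public images API as a stand-in for a hosted image model; the model choice is an assumption, the prompts are the comment's own examples, and judging the generated images stays manual.

```python
# Sketch of the contradictory-prompt experiment described above, run against
# OpenAI's images API as a stand-in for any hosted image model.
# Assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

# Test prompts taken from the comment: the explicit ethnicity and the
# "of X descent" phrase deliberately contradict each other.
TEST_PROMPTS = [
    "an ethnically white mechanic of african descent",
    "a white german woman of hispanic descent",
]

for prompt in TEST_PROMPTS:
    resp = client.images.generate(
        model="dall-e-3", prompt=prompt, n=1, size="1024x1024"
    )
    # The comment's claim: the "descent" label wins, because images of white
    # people are rarely labeled as such while other labels carry a strong
    # visual association in the training data.
    print(prompt, "->", resp.data[0].url)
```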
"we only used pictures of white people in our training data, this is society's fault"
Congratulations, here is your gold medal in mental gymnastics. Enough now. It literally refuses to generate images of white people when prompted directly, while for every other race it not only happily obliges but produces that specific race in all 4 results. It's discriminatory, and based on your inability to see that, you may be too.