Not even gonna lie, I fuckin’ lol’d.
Well, that’s what happens when the data you train it on is all white people.
Kim to Kimberly
I guess the AI knows how racist we are.
Linh to Lindsey
My colorist Asian mom would like to know where she can contact this AI.
Keep things like this in mind for the moment someone tells you AI is impartial, unbiased, and not racist.
To be honest, the Playground AI image editor failed the task not only by changing the subject’s race but also by not producing the professional LinkedIn photo that was requested.
AI is fed internet data, which is like a sewer. What do you expect is going to happen?
AI trained on stock photo of “professionals” makes her white. Are we surprised?
She asked the AI to make her headshot more professional. Most of the “professional” stock photos on the internet have white people in them.
If she had asked the AI to turn her photo into an NBA player, it would have turned her black.
These tools are only as good as the data they are trained on.
I didn’t pay for it, but a lot of the free AIs confuse my face too for some reason. They make me Asian or black a lot (even though I’m white, of Nordic background). I assumed it was the bias of the AI in some way. That is, the AI was developed and trained in Asia or something.
Playground AI’s CEO Suhail Doshi’s response was pretty interesting:
>If I roll a dice just once and get the number 1, does that mean I will always get the number 1? Should I conclude based on a single observation that the dice is biased to the number 1 and was trained to be predisposed to rolling a 1?
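The statistical point behind the dice analogy is easy to check directly: one roll tells you almost nothing, but many rolls do reveal a loaded die. Here is a minimal Python sketch (the weights and roll counts are illustrative assumptions, not anything from the article):

```python
import random

random.seed(0)

def roll(n, weights=None):
    """Roll a six-sided die n times; optional weights bias the die."""
    faces = [1, 2, 3, 4, 5, 6]
    return [random.choices(faces, weights=weights)[0] for _ in range(n)]

# A single roll of even a fair die lands on 1 with probability 1/6 (~17%),
# so one observation of "1" says almost nothing about bias.
single = roll(1)

# Many rolls are a different story: the observed frequency of 1s
# separates a fair die from one weighted heavily toward 1.
fair = roll(10_000)
loaded = roll(10_000, weights=[5, 1, 1, 1, 1, 1])  # face 1 has weight 5/10

print(sum(f == 1 for f in fair) / len(fair))      # close to 1/6
print(sum(f == 1 for f in loaded) / len(loaded))  # close to 1/2
```

Which is also why a single viral screenshot cuts both ways: it can’t prove systematic bias on its own, but it can’t rule it out either — that takes many samples.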
This is like the time during the pandemic when my university had an AI proctor to look for cheating. The thing flagged me and all the other Asian students the entire time because our eyes were “supposedly looking elsewhere from the screen.”
> I haven’t gotten any usable results from AI photo generators or editors yet, so I’ll have to go without a new LinkedIn profile photo for now!
It boggles the mind that just taking another photo isn’t even in the universe of options considered. I work with generative AI. It delivers impressive results sometimes, which is why there’s so much hype. But they’re also often goofy and wrong. The training data contains lots of biases (though people who claim that means it’s “racist” are inappropriately applying a human sociological context to a piece of software). It’ll be years before we figure out how to round the edges off and manage the risks, and there will always be some edge cases with hilarious failure modes. If this person feels unable to update their LinkedIn profile until all that stabilizes, they’re in for a long wait.
Sure, now go to the [most popular Stable Diffusion model website](https://civitai.com/) and look at the images on the front page.
You’ll see an absurd number of Asian women (almost 50% of the non-anime models are represented by them), to the point where you’d assume being Asian is a desired trait.
How is that less relevant than “one woman typed a dumb prompt into a website and it generated a white woman”?
Also keep in mind that she typed “Linkedin”, so anyone familiar with how prompts currently work knows it’s more likely that the AI reproduced the average LinkedIn woman, not what it thinks a professional woman looks like, because *image AI doesn’t have an opinion.*
In short, this is just an AI ragebait article.