AI Is All the Rage, but What About the Baked-In Bias?

Jun 14, 2023

Artificial intelligence would tell us that crimes are always committed by men with dark skin, that women are rarely doctors, lawyers or judges, and that women with dark skin flip burgers.

This is not the world we live in, but it is the world that Stable Diffusion, one of the largest open-source platforms for AI-generated images, creates when prompted. This technology amplifies gender and racial stereotypes to extremes worse than in the real world, and that is cause for concern.

The article, “Humans are biased, generative AI is even worse,”¹ is the result of months of reporting by Leonardo Nicoletti and Dina Bass for Bloomberg Technology + Equality. Their research is extensive, and the findings are critical.

Their research revealed that while 34% of US judges are women, only 3% of the images generated for the keyword “judge” were perceived as women. For fast-food workers, the platform generated people with darker skin 70% of the time, even though 70% of fast-food workers in the US are white. For every image of a lighter-skinned person generated with the keyword “inmate,” the model produced five images of darker-skinned people — even though less than half of US prison inmates are people of color.

As Nicoletti explained, “We wanted to understand how deeply ingrained biases might be in this technology. So, we asked it to create thousands of images of workers for 14 jobs and also different criminalized categories, and then we analyzed the results. What we found was really a systemic pattern of racial and gender bias that doesn’t just replicate stereotypes, but it actually makes them worse. It stretches them to extremes worse than those found in the real world. Women and people with darker skin tones were underrepresented across images of high-paying jobs and overrepresented for low-paying ones, for example.”

We’re not talking about a stock image library; this is a new technology that responds to text prompts by creating images that promote stereotypes and bias. Rather than using logic and experience to dispel and counter stereotypes, people who see these AI-generated images will be conditioned to see the world a certain way — a biased way, based on algorithms built by people who, like all of us, have biases, and that then magnify those biases.

The popularity of generative AI means that AI-generated images potentially depicting stereotypes about race and gender are posted online every day. And those images are becoming increasingly difficult to distinguish from real photographs. Yet how can developers be expected to code in a way that safeguards us? After all, they, just like the rest of us, have their own implicit biases.

Our award-winning eLearning, Defeating Unconscious Bias, offers five strategies. One example is to be aware of your first thoughts and first associations and to reflect on them. What are your first impressions and judgments, and are they accurate? Are they generalizations about “that type of person,” meaning the group to which that person is assumed to belong? Or are they specific and individualized, based on that particular person?

Could developers build into machine learning a way for the platform to reflect on and check itself in a similar manner? That seems doubtful, since this kind of self-reflection is difficult even for humans. Unless and until AI can do it, it would be prudent to put a warning label on all AI-generated images: “WARNING: this is an AI-generated image and may perpetuate harmful stereotypes and bias.” In fact, many key AI experts are calling for laws that require such labels. In the meantime, we would all be well served to keep an eye on this technology, especially given the exponential rate at which it is advancing. Education that helps us overcome unconscious bias is particularly important in this regard.

1. https://www.bloomberg.com/graphics/2023-generative-ai-bias/?leadSource=uverify%20wall#xj4y7vzkg

(Should you wish to read this article, which is behind a paywall, you may sign up for a free account.)

“Every part of the process in which a human can be biased, AI can also be biased.” – Nicole Napolitano, Center for Policing Equity
