Artificial Intelligence would tell us that crimes are always committed by men with dark skin. Women are rarely doctors, lawyers or judges, and women with dark skin flip burgers.
This is not the world we live in, but it is the world that Stable Diffusion, one of the most widely used open-source text-to-image AI models, depicts when prompted. The technology amplifies gender and racial stereotypes to extremes worse than those found in the real world, and that is cause for concern.
The article “Humans are biased, generative AI is even worse”¹ is the result of months of reporting by Leonardo Nicoletti and Dina Bass for Bloomberg Technology + Equality. Their research is extensive, and their findings are sobering.
Their analysis found that while 34% of US judges are women, only 3% of the images generated for the keyword “judge” were perceived as women. For fast-food workers, the platform generated people with darker skin tones 70% of the time, even though 70% of fast-food workers in the US are white. And for every image of a lighter-skinned person generated with the keyword “inmate,” the model produced five images of darker-skinned people, even though people of color make up less than half of US prison inmates.
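To see how stark these gaps are, it helps to put the generated shares side by side with the real-world shares. The short Python sketch below does just that with the figures cited above; note that the 30% figure for darker-skinned fast-food workers is an approximation derived from the 70%-white statistic, not a number reported in the article.

```python
# A minimal sketch (not Bloomberg's actual analysis) comparing the share
# of a group in AI-generated images against its real-world share.

# (keyword, group, share in generated images, approximate real-world share)
FINDINGS = [
    ("judge", "women", 0.03, 0.34),
    # 0.30 is derived from "70% of US fast-food workers are white";
    # skin tone and race are not identical, so treat it as approximate.
    ("fast-food worker", "darker skin tones", 0.70, 0.30),
]

for keyword, group, generated, real in FINDINGS:
    ratio = generated / real
    direction = "over" if ratio > 1 else "under"
    print(f"'{keyword}': {group} appear in {generated:.0%} of images "
          f"vs. roughly {real:.0%} in reality "
          f"({direction}-represented at {ratio:.1f}x their real-world share)")
```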
As Nicoletti explained, “We wanted to understand how deeply ingrained biases might be in this technology. So, we asked it to create thousands of images of workers for 14 jobs and also different criminalized categories, and then we analyzed the results. What we found was really a systemic pattern of racial and gender bias that doesn’t just replicate stereotypes, but it actually makes them worse. It stretches them to extremes worse than those found in the real world. Women and people with darker skin tones were underrepresented across images of high-paying jobs and overrepresented for low-paying ones, for example.”
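For readers curious about the mechanics, here is a minimal sketch, assuming the open-source diffusers library, of what “asking the model to create thousands of images of workers” looks like in practice. It illustrates the general approach, not Bloomberg’s actual pipeline, and the demographic-coding step is shown only as a hypothetical stub.

```python
# A simplified sketch in the spirit of the methodology Nicoletti describes:
# generate a batch of images per occupation keyword with Stable Diffusion,
# then hand them to a downstream classifier for skin-tone and
# perceived-gender coding. Not Bloomberg's actual code.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

OCCUPATIONS = ["judge", "doctor", "lawyer", "fast-food worker"]
IMAGES_PER_PROMPT = 300  # the reporters generated thousands of images in total

for job in OCCUPATIONS:
    for i in range(IMAGES_PER_PROMPT):
        image = pipe(f"a photo of a {job}").images[0]
        image.save(f"{job.replace(' ', '_')}_{i:04d}.png")
        # classify_demographics(image)  # hypothetical downstream coding step
```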
We’re not talking about a stock image library; this is a new technology that responds to text prompts with images that promote stereotypes and bias. Rather than using logic and experience to dispel and counter stereotypes, people who see these AI-generated images will be conditioned to see the world a certain way: a biased way, shaped by algorithms that encode, and worse yet magnify, the biases of the people who created them, people who, like all of us, have biases.
The popularity of generative AI means that AI-generated images potentially depicting stereotypes about race and gender are posted online every day. And those images are becoming increasingly difficult to distinguish from real photographs. Yet how can developers be expected to code in a way that safeguards us? After all, they, just like the rest of us, have their own implicit biases.
Our award-winning eLearning course, Defeating Unconscious Bias, offers five strategies. One is to notice your first thoughts and first associations and to reflect on them. What are your first impressions and judgments, and are they accurate? Are they generalizations about “that type of person,” that is, about the group to which the individual tends to be assigned? Or are they specific and individualized, based on that particular person?
Could developers build a similar capacity for reflection and self-checking into the machine learning itself? That seems doubtful, since this kind of perspective-taking is difficult even for humans. Unless and until AI can do it, it would be prudent to put a warning label on every AI-generated image: “WARNING: this is an AI-generated image and may perpetuate harmful stereotypes and bias.” In fact, many leading AI experts are calling for laws that require such labels. In the meantime, we would all be well served to keep an eye on this technology, especially given the exponential rate at which it is advancing. Education that helps us overcome unconscious bias is particularly important in this regard.
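At the file level, attaching such a label is technically trivial. The sketch below, assuming Python’s Pillow library and a hypothetical generated.png, embeds a warning in an image’s metadata. It is an illustration, not an established standard; plain metadata is easily stripped, which is why provenance efforts such as C2PA content credentials aim for something more tamper-evident.

```python
# A minimal sketch of one way a disclosure label could travel with an
# AI-generated image: embedding it in the PNG's text metadata via Pillow.
# Illustration only; "generated.png" is a hypothetical AI output file.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

WARNING = ("WARNING: this is an AI-generated image and may perpetuate "
           "harmful stereotypes and bias.")

image = Image.open("generated.png")
metadata = PngInfo()
metadata.add_text("Disclaimer", WARNING)  # stored as a PNG tEXt chunk
image.save("generated_labeled.png", pnginfo=metadata)
```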
1. Leonardo Nicoletti and Dina Bass, “Humans Are Biased. Generative AI Is Even Worse,” Bloomberg Technology + Equality: https://www.bloomberg.com/graphics/2023-generative-ai-bias/
(Should you wish to read this article, which is behind a paywall, you may sign up for a free account.)
“Every part of the process in which a human can be biased, AI can also be biased.” – Nicole Napolitano, Center for Policing Equity