
Decoding Racism In Technology

By CHM Editorial | May 05, 2021

Joy Buolamwini: “joy blend weenie”

The audio transcription for CHM’s virtual event Is AI Racist? delivered “joy blend weenie” for Joy Buolamwini, an accomplished computer scientist at the MIT Media Lab, who is Black and female. Sure, her name may be hard for Americans to pronounce, but Google delivers over 17 pages of search results about her, so you might think machine learning systems had plenty of data to learn from. The transcription algorithm also transformed Black female scholar Dr. Vilna Bashi Treitler into “Dr. Vilna bossy trailers” and rendered White professor Miriam Sweeney as “Miriam, sweetie.” Though admittedly a very small sample, the results are in keeping with research on bias in AI: the system had no trouble with White and Asian male names, even “Satya.”
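
That kind of disparity can be measured. Research on bias in speech recognition typically compares word error rates (WER) across speaker groups instead of reporting one overall number. Below is a minimal, self-contained sketch of that disaggregated check; the reference/transcript pairs are hypothetical stand-ins echoing the examples above, not real evaluation data.

```python
# A minimal sketch of a disaggregated evaluation for speech-to-text:
# compute word error rate (WER) separately for each speaker group.
# All sample data below is hypothetical, for illustration only.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level edit distance divided by reference length."""
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # dp[i][j] = edits to turn the first i ref words into the first j hyp words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# Hypothetical (reference, transcript, speaker_group) triples.
samples = [
    ("joy buolamwini", "joy blend weenie", "Black women"),
    ("vilna bashi treitler", "vilna bossy trailers", "Black women"),
    ("satya", "satya", "Asian men"),
]

by_group: dict[str, list[float]] = {}
for ref, hyp, group in samples:
    by_group.setdefault(group, []).append(word_error_rate(ref, hyp))

for group, rates in by_group.items():
    print(f"{group}: mean WER = {sum(rates) / len(rates):.2f}")
```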

Exploring bias and racism in artificial intelligence, tech products and the technology industry, and in social and cultural systems, the lively panel discussion took place on April 20, 2021, following a special opportunity for the CHM community to view the new documentary Coded Bias. The panel included Lili Cheng from Microsoft AI and Research; Charlton McIlwain, vice provost for Faculty Engagement & Development at NYU; UCLA Information Studies Assistant Professor Safiya Noble; and computer scientist and activist Deborah Raji. CHM moderator David C. Brock began by asking: What exactly is racism?

Racism Is A Hierarchical Power Structure

Who has power and who doesn’t? Safiya Noble explains that in the US, different ethnicities are categorized into a racial hierarchy with Black on the bottom and White on top. Where you are in the hierarchy will determine the opportunities you will have—or not have—throughout your life. The racial hierarchy is deeply embedded in law and social and cultural systems. Both Lili Cheng and Deborah Raji note that tech has to do a better job addressing the ways in which this kind of bias is also embedded in tech products and perpetuated by tech companies. Charlton McIlwain has studied the long history of racism in tech and has developed a basic definition that provides a baseline for discussing these issues.

Charlton McIlwain provides a basic definition of racism.

Building on McIlwain’s definition, Noble’s work reminds us that hierarchical power structures of race and gender intersect, pushing Black and Indigenous women to the bottom of every power system. For example, in the tech sector, very few Black male founders receive funds from venture capitalists, and the amount received by Black women founders is vanishingly small.

AI Is Not Moral or Immoral… or Neutral

How do we understand racism in artificial intelligence and technology in general? Deborah Raji points out that people often focus on the interpersonal aspects of racism, characterizing AI as if it were a person (as the title of this event does). But racist systems are rarely built by racists on purpose, although it does happen. More often, Raji says, engineers simply aren’t thinking about racial equity during the development process for a product or system. An engineer herself, she’s seen firsthand how attempts to emulate and automate human thinking through AI can reduce accountability and put marginalized people at risk, unable to appeal decisions made by an algorithm. Racism is actually coded into the product. Lili Cheng describes exactly how this happens.

Lili Cheng explains how bias can be encoded in artificial intelligence.
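
One common route for bias to get coded in is the training data itself: if a group is underrepresented in the data, or the data that does exist for them is noisier, the resulting model is simply less accurate for that group, and a single aggregate accuracy number hides the gap. A minimal sketch with synthetic data (the groups, sizes, and numbers here are invented for illustration, not drawn from the panel):

```python
# A minimal sketch of one way bias gets encoded: the training data
# underrepresents one group, so the model works worse for them.
# All data here is synthetic; numbers are for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Two-class data; `shift` controls how separable the classes are."""
    X = np.vstack([rng.normal(0, 1, (n, 2)),
                   rng.normal(shift, 1, (n, 2))])
    y = np.array([0] * n + [1] * n)
    return X, y

# Group A dominates the training set; group B is scarce and its
# classes overlap more (e.g., sensors or labels work less well).
Xa, ya = make_group(1000, shift=3.0)
Xb, yb = make_group(50, shift=1.0)

model = LogisticRegression()
model.fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on fresh samples from each group: aggregate accuracy
# hides the disparity, per-group accuracy exposes it.
Xa_test, ya_test = make_group(500, shift=3.0)
Xb_test, yb_test = make_group(500, shift=1.0)
print("group A accuracy:", model.score(Xa_test, ya_test))
print("group B accuracy:", model.score(Xb_test, yb_test))
```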

Uncovering Coded Bias 

So, what kinds of technologies use artificial intelligence that can deliver biased outcomes? Safiya Noble described discovering that even before Google, search engines that purported to deliver knowledge and facts commodified women and girls, presenting them in hyper-sexualized ways; women of color were mostly represented pornographically. These kinds of misrepresentations have continued, yet attention tends to go to bias in social media rather than to racism in banal technologies like search, or to larger systemic issues, such as how digital advertising platforms have taken over the delivery of knowledge from libraries and universities.

And there’s certainly enough to worry about there. Lili Cheng described a well-known incident with a chatbot called Tay that Microsoft developed to interact with students. The development team decided to try it out on Twitter, and within a few hours users had taught it to be sexist, racist, xenophobic, and profane. The incident changed the way she viewed AI and social media forever.

Deborah Raji’s eyes were opened a few years ago during her experience working at an early startup, where a system was flagging content uploaded by people of color as "unsafe" and removing it. Interested in ethics and fairness in AI, she reached out to computer scientist Joy Buolamwini at the MIT Media Lab, one of only a few others thinking about those issues at the time.

Deborah Raji explains the problems with biased facial recognition technology.

Biased AI has real and disturbing consequences: Black men have been misidentified by facial recognition technologies used by police, escalating to the point of arrest, and organ transplant algorithms have disproportionately rejected people of color.
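
Audits of commercial facial recognition, including work Raji co-authored, made such failures visible by disaggregating error rates by demographic group rather than reporting a single accuracy score. Below is a minimal sketch of that disaggregation step; the match decisions and group labels are hypothetical, for illustration only.

```python
# A minimal sketch of auditing a face-matching system: report the
# false match rate (the "misidentified as someone else" failure mode)
# separately per demographic group. All records here are hypothetical.
from collections import defaultdict

# Each record: (group, predicted_match, true_match) for a probe/gallery pair.
pairs = [
    ("group_1", True, True), ("group_1", False, False), ("group_1", False, False),
    ("group_2", True, False), ("group_2", True, True), ("group_2", False, False),
]

false_pos = defaultdict(int)  # predicted a match where there was none
negatives = defaultdict(int)  # all truly non-matching pairs

for group, predicted, actual in pairs:
    if not actual:
        negatives[group] += 1
        if predicted:
            false_pos[group] += 1

for group in negatives:
    rate = false_pos[group] / negatives[group]
    print(f"{group}: false match rate = {rate:.0%}")
```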

Charlton McIlwain noted that IBM worked in secret for years building a facial recognition system with camera data provided by the NYPD. But most people don’t realize that law enforcement and the technology industry have partnered for decades to develop AI systems with racist motivations. The computational systems developed in the late ‘60s to deal with “the race problem” in poor urban areas sowed the seeds for today’s facial recognition, location tracking, recidivism predictions, and more. There is a lot of work to be done to break down racist systems built into society’s infrastructure, McIlwain says.

Can AI Be Anti-Racist?

So, if it’s possible to build systems that mirror racism in society, can we instead build systems that intentionally promote anti-racism? Perhaps. But it is important to recognize that machines are not the neutral entities that many still believe them to be. Safiya Noble suggests that when we understand that, we can start to consider the many ways that all kinds of systems and infrastructures can be racist, and to consider technology as a part of that larger problem. Might we want to rethink the ends to which we deploy technology and the values behind the tech we develop? And, shouldn’t we include the people most likely to be harmed in those conversations?

Safiya Noble considers how to build unbiased systems and technologies.

There will not be an “algorithmic fix” to problems like racism, structural inequality, and the wealth gap, says Noble. Charlton McIlwain agrees that we’re poised to repeat the same problems if we assume tech can fix issues that need to be addressed at the level of systems and infrastructure. He says, “Fixing a line of code or a new app is not a fix.” Deborah Raji, however, sees opportunities on the path forward—as long as we listen to the marginalized voices that often see the risks and harms in tech long before anyone else. Companies must stop ignoring or silencing those voices and then dodging accountability by insisting that “no one could have seen it coming,” she says. Raji believes regulatory and legal structures must hold AI developers and companies accountable when they harm certain groups in systematic ways. Today, there is no incentive for them to evolve in how they address these issues.

Safiya Noble offers a way forward for those committed to change: reimagine what cross-disciplinary teams in tech should look like. They should include social scientists, humanists, historians, experts in gender and ethnic studies, and others who think deeply about society. And they should be given as much power over the product as engineers and coders. Could tech then, perhaps, meet its potential to serve all of humanity?

Watch the full conversation

Is AI Racist? | CHM Live, April 20, 2021

About The Author

CHM Editorial consists of editors, curators, writers, educators, archivists, media producers, researchers, and web designers, looking to bring CHM audiences the best in technology and Museum news.
