
Mining More Than Data

By Emily Parsons | June 15, 2021

The Hidden Costs of AI

The digital assistant sitting on your living room table can tell you a lot—if it’s likely to rain today, when your next appointment is, or how to make a chocolate cake—but can it tell you its life story? Before it arrived on your doorstep, its components were mined from the earth, then smelted, assembled, packaged, and shipped by workers. While you own it, it collects data from you that can be used to train artificial intelligence systems. And after you dispose of it, the device will move on to an e-waste dump in a place like Ghana or Pakistan.

Through the “Anatomy of AI” project, USC Annenberg professor Kate Crawford discovered how a single Amazon Echo draws on planetary resources, human labor, and data throughout its lifetime in ways that are invisible to most consumers. The project ignited her interest in the broad impacts of artificial intelligence. Though it is made to seem magical, says Crawford, AI depends heavily on materials and people. It is, she argues, an extractive industry.

In a virtual CHM Live event on May 27, 2021, Crawford shared insights about the local and global impacts of AI from her new book, Atlas of AI, with New York Times technology reporter Kashmir Hill.

AI is Not Artificial

Devices like the Amazon Echo are designed to look clean and fresh, giving users no notion of their ecological toll. But AI relies on layers of minerals and energy.

Lithium, for example, is crucial to the rechargeable batteries that power so much of our technology, from iPhones to Tesla cars. We assume that mineral resources can continually drive planetary computation, but we are reaching a critical endpoint. In the case of lithium, we already face an availability crisis: if we do not change our consumption habits, warns Crawford, we could run out as early as 2040.

Despite this alarming reality, marketing has contributed to a perception of AI as a sort of magical and otherworldly technology. At the same time, we are also expected to trust AI to make important decisions with very real consequences, affecting criminal justice, education, hiring, and other areas. Crawford and the academic Alex Campolo call this dialectic “enchanted determinism.”

Kate Crawford defines “enchanted determinism.”

Fauxtomation

So many debates about the future of work under AI focus on the idea of people being replaced by robots, says Crawford, but too little attention has been paid to the way that people are being treated like robots.

Increasingly, employers use AI to observe and track workers in order to extract the most value from them. AI systems can measure efficiency by the number of emails written or meetings taken, record faces on workplace cameras, or even monitor wellness stats to draw conclusions about employee health.

In other cases, workers actually are the AI. It turns out that some “automated” systems are not really automated at all—hundreds or thousands of low-paid human workers perform microtasks on crowdsourcing platforms to make systems appear artificially intelligent. For example, an “AI” personal assistant produced by x.ai is, in reality, many people doing exhausting labor for only a few cents per task. The writer Astra Taylor calls this phenomenon “fauxtomation.”

Kashmir Hill describes her investigation of the app Invisible Girlfriend.

Why do tech companies work so hard to maintain the illusion of automation? The idea of a purely autonomous system can attract attention from funders and consumers, but AI is not as advanced as many people believe. It is not intelligent in the same way humans are, Crawford says, and much of what we consider “artificially intelligent” today is built on repetitive human actions.

Bias: Tip of the Iceberg?

A creditworthiness algorithm that downranked women; systems that failed to recognize people with darker skin tones; voice recognition that didn’t respond to female-sounding voices. These are just some examples of how bias can create negative outcomes in AI systems. Yet the problem goes deeper than the biases of individual researchers or engineers. It starts with the datasets they use to train AI systems.

Consisting of thousands, millions, or billions of pieces of data, like photographs or text, training datasets become “ground truth” for an AI system. Engineers are likely to think of datasets as tools in a toolbox, applying them without looking inside or knowing exactly what kinds of data they contain. In this way, they can unwittingly place flawed data at the core of their systems.
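
As a rough illustration of what “looking inside” a dataset might involve, the sketch below counts the category labels in a hypothetical ImageNet-style metadata file and flags label names that appear to describe people for manual review. The file name (labels.tsv), the tab-separated format, and the flagged terms are assumptions made for this example, not details from Crawford’s work.

```python
# Minimal sketch: audit the label vocabulary of a training set before
# treating it as "ground truth." Assumes a hypothetical metadata file with
# one "image_id<TAB>category" pair per line.

from collections import Counter


def load_labels(path):
    """Read one image_id<TAB>category pair per line (assumed format)."""
    with open(path, encoding="utf-8") as f:
        return [line.rstrip("\n").split("\t")[1] for line in f if line.strip()]


def audit(labels, flagged_terms):
    """Print label frequencies and any categories matching flagged terms."""
    counts = Counter(labels)
    print(f"{len(counts)} distinct categories across {len(labels)} labeled images")
    for category, n in counts.most_common(10):
        print(f"  {category}: {n}")
    suspect = sorted(c for c in counts
                     if any(term in c.lower() for term in flagged_terms))
    if suspect:
        print("Categories describing people that deserve review:", suspect)


if __name__ == "__main__":
    labels = load_labels("labels.tsv")  # hypothetical metadata file
    audit(labels, flagged_terms=["person", "woman", "man", "people"])
```

Even a simple pass like this makes the contents of a dataset visible before it is wired into a system, which is exactly the step that is skipped when datasets are treated as interchangeable tools.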

Kate Crawford explains how bias gets built into AI systems.

Training set data can be collected in problematic ways. Datasets created by the National Institute of Standards and Technology (NIST), for example, contained mugshots of people who had been arrested and could not consent to having their photos taken, let alone used in a training set. If there is a photo of you on the internet, chances are that you, too, are part of an AI training dataset without your knowledge or permission. Clearview AI, which created the world’s largest face training dataset, has scraped at least three billion images of people from the internet.

In her “Excavating AI” project with Trevor Paglen, Crawford spent two years opening up hundreds of training datasets. In ImageNet, one of the most influential training sets in the history of machine learning, Crawford found classifications of people that were racist, misogynistic, offensive, or even illogical. This data had been used to train technical systems for over a decade.

Some 660,000 images have since been removed from ImageNet, including many of the “people” categories that Crawford had critiqued. Though removing these categories can help create more equitable AI in the future, systems that were trained on them are still operating in the world and may continue to cause harmful outcomes. When the original data is no longer viewable, how can we investigate the causes of those outcomes?

Kate Crawford describes what happens when AI training datasets are removed or edited.

We Need Many Atlases

In 1966, MIT’s Summer Vision Project set out to attach labels to images in order to recognize any object in the world. This work would take much longer than one summer—in fact, the challenge is still unsolved.

Crawford recognizes that Atlas of AI is not comprehensive and notes, “We need many atlases.” Books that allow us to view things at different scales, from entire continents to specific towns, can help us better understand the scale of AI, from its planetary reach to its localized impacts on individuals and communities.

About The Author

Emily Parsons is a researcher and content developer for the Exponential Center at the Computer History Museum. Her research on entrepreneurs, venture capitalists, and other key figures in Silicon Valley’s innovation ecosystem contributes to the development of educational content for a variety of Exponential Center initiatives.
