Panic and Privilege

By CHM Editorial | April 09, 2020

Editor’s Note: This excerpt is the third in a four-part series from Pamela McCorduck’s 2019 book, This Could Be Important: My Life and Times with the Artificial Intelligentsia. All excerpts are shared with permission from the author.

Science and technology, along with traditional “humanities” topics such as language, government, ethics, and writing, influence and inform our daily lives. We’re often unaware of how they intersect, although our educational system draws a clear dividing line between the two “cultures”: the hard sciences and the humanities. A literature scholar and novelist, Pamela McCorduck transcends those conceptual boundaries and turns a critical eye on the divide. In her 2019 book, This Could Be Important: My Life and Times with the Artificial Intelligentsia, McCorduck relates how she’s seen the two cultures converge in the field of artificial intelligence. Humanists and computer scientists alike ask questions about what exactly “intelligence” is and what it means to be human.

In the excerpt below, McCorduck explores how privileged white males’ dominance of Western culture is challenged by the idea of an artificial intelligence. Unlike women and people in less powerful positions, they aren’t used to having their worldview disregarded, undervalued, or superseded. Interestingly, McCorduck notes that IBM’s Watson is characterized as a “he.” We should all be wary of attributing a male gender to a new type of non-human intelligence with potentially superior abilities lest we perpetuate existing stereotypes and harmful power dynamics.

We must also apply a humanistic lens to the ongoing development of AI and machine learning, thinking deeply about incorporating into them “the FATES”: fairness, accountability, transparency, ethics, security, and safety. If we don’t, the “autonomous revolution” of AI could indeed lead to the dystopian world decried by threatened white males. But if we do, it just might bring a new kind of utopia, one with equal promise for all human beings.

In the mid-teens of the 21st century, a startling efflorescence of declarations, books, articles, and reviews appeared. (Typical titles: “The Robots Are Winning!” “Killer Robots are Next!” “AI Means Calling Up the Demons!” “Artificial Intelligence: Homo sapiens will be split into a handful of gods and the rest of us.”) Even Henry Kissinger (2018) tottered out of the twilight of his life to declare that AI was the end of the Enlightenment, a declaration that gives pause for many reasons.

The profound, imminent threat AI posed to privileged white men caused this pyrexia. I laughed to friends, “These guys have always been the smartest ones on the block. They really feel threatened by something that might be smarter.” Because most of these privileged white men admitted AI had done good things for them (and none of them, so far as I know, was willing to give up his smartphone), they brought to mind St. Augustine: “Make me chaste, oh Lord, but not yet.”

Very few women took this up the same way (you’d think we don’t worry our pretty heads). One who did, Louise Aronson, a specialist in geriatric medicine (2014), dared to suggest that robot caregivers for the elderly might be a positive thing. Sherry Turkle (2014), responding to Aronson’s opinion piece in The New York Times with a letter to the editor, worried that such caregivers would only simulate caring about us. That opened some interesting questions about authentic caring and its simulation even among humans, but it didn’t address the issues around who would do this caregiving and how many of those caregivers society could afford.

As I read this flow of heated declarations about the evils of AI, ranging from the thoughtful to the personally revealing to the pitifully derivative—a Dionysian eruption if ever there was one—I remembered the brilliant concept described and named by the film critic Laura Mulvey in 1975: the male gaze. She coined it to describe the dominant mode of filmmaking: a narrative inevitably told from a male point of view, with female characters as bearers, not makers, of meaning. Male filmmakers address male viewers, she argued, soothing their anxieties by keeping the females, so potent with threat, as passive and obedient objects of male desire. (The detailed psychoanalytic reasoning in her article you must read for yourself.)

In many sentences of Mulvey’s essay, I could easily substitute AI for women: AI signifies something other than (white or Asian) male intelligence and must be confined to its place as a bearer not a maker of meaning. To the male gaze, AI is object; its possible emergence as autonomous subject, capable of agency, is frightening and must be prevented, because its autonomy threatens male omnipotence, male control (at least those males who fret in popular journals and make movies). Maybe that younger me who hoped AI might finally demolish universal assumptions of male intellectual superiority was on to something.

The much older me knows that if AI poses future problems (how could it not?) it already improves and enhances human intellectual efforts and has the potential to lift the burden of petty, meaningless, often backbreaking work from humankind. Who does a disproportionate share of that petty, meaningless, backbreaking work? Let a hundred Roombas bloom.1

But the handwringing showed that people were at last taking AI seriously.

Another great change I’ve seen is the shift of science from the intellectual periphery of my culture to its center. (Imagine C. P. Snow presenting his Two Cultures manifesto now. Laughable.) These days, not to know science at some genuine level is to forfeit your claims to the life of the mind. That shift hasn’t displaced the importance of the humanities. As we saw with the digital humanities—sometimes tentative, sometimes ungainly, the modest start of something profound—the Two Cultures are reconciling, recognizing each other as two parts of a larger whole, which is what it means to be human. Not enough people yet know that a symbol-manipulating computer could be a welcome assistant to thinking, whether about theoretical physics or getting through the day.

AI isn’t just for science and engineering, as it was in the beginning, but reshapes, enlarges, and eases many tasks. IBM’s Watson, for instance, stands ready to help in dozens of ways, including artistic creativity: the program (“he” in the words of both his presenter and the audience) was a big hit at the 2015 Tribeca Film Festival when it was offered as an eager colleague to filmmakers (Morais, 2015).

At the same time, AI also complicates many tasks. If an autonomous car requires millions of lines of code to operate, who can detect when a segment goes rogue? Mary Shaw, the Alan J. Perlis Professor of Computer Science and a highly honored software expert, worries that autonomous vehicles are moving too quickly from expert assistants beside the wheel and responsible for oversight, to ordinary human drivers responsible for oversight, to full automation without oversight. She argues that we lack enough experience to make this leap. Society would be better served by semi-autonomous systems that keep the vehicle in its lane, observe the speed limit, and stay parked when the driver is drunk. A woman pushing a bike, its handles draped with shopping bags, was killed by an autonomous vehicle because who anticipated that? If software engineering becomes too difficult for humans, and algorithms are instead written by other algorithms, then what? (Smith, 2018). Who gets warned when systems “learn” but that learning takes them to places that are harmful to humans? What programming team can anticipate every situation an autonomous car (or medical system, or trading system, or . . .) might encounter? “Machine learning is inscrutable,” Harvard’s James Mickens says (USENIX, 2018). What happens when you connect inscrutability to important real-life things, or to what he calls “the Internet of hate,” also known simply as the Internet? What about AI mission creep?2

Columbia University’s Jeannette Wing has given thought to these issues and offers an acronym: FATES. It stands for all the aspects that must be incorporated into AI, and machine learning in particular: Fairness, Accountability, Transparency, Ethics, Security, and Safety. Those aspects should be part of every data scientist’s training from day one, she says, and at all levels of activity: data collection, analysis, and decision-making models. Big data is already transforming all fields, professions, and sectors of human activity, so everyone must adhere to FATES from the beginning.

But fairness? In real life, multiple definitions exist.
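
To make that concrete, here is a minimal sketch of my own (it comes from neither the book nor Wing) showing two widely used formal definitions of fairness, demographic parity and equal opportunity, applied to the same classifier. Everything in it, data included, is invented for illustration.

    # A minimal sketch (not from the book): two common formal definitions of
    # "fairness" can disagree about the same classifier. All data here is
    # invented toy data.

    def demographic_parity_gap(preds, groups):
        # Difference in positive-prediction rates between group 0 and group 1.
        def rate(g):
            members = [p for p, grp in zip(preds, groups) if grp == g]
            return sum(members) / len(members)
        return abs(rate(0) - rate(1))

    def equal_opportunity_gap(preds, labels, groups):
        # Difference in true-positive rates (among actual positives) by group.
        def tpr(g):
            pos = [p for p, y, grp in zip(preds, labels, groups) if grp == g and y == 1]
            return sum(pos) / len(pos)
        return abs(tpr(0) - tpr(1))

    # Toy data: group 1 has a higher base rate of actual positives.
    groups = [0, 0, 0, 0, 1, 1, 1, 1]
    labels = [1, 0, 0, 0, 1, 1, 1, 0]
    preds  = [1, 0, 0, 0, 1, 1, 1, 0]   # a classifier that matches the labels exactly

    print(demographic_parity_gap(preds, groups))         # 0.5: unfair by parity
    print(equal_opportunity_gap(preds, labels, groups))  # 0.0: fair by opportunity

When base rates differ between groups, even a perfectly accurate classifier violates demographic parity while satisfying equal opportunity; the choice among definitions is itself a value judgment, which is exactly why fairness resists a single encoding.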

Accountability? Who’s responsible is an open question at present, but policy needs to be set, compliance must be monitored, and violations must be exposed, fixed, and, if necessary, fined.

Transparency? Assurances of why the output can be trusted are vital, but we already don’t fully understand how some of the technology works. That’s an active area of research.

Ethics? Sometimes a problem has no “right” answer, even when the ambiguity might be encoded. Microsoft has the equivalent of an institutional review board (IRB) to oversee research (Google’s first IRB fell apart publicly after a week), but firms aren’t required to have such watchdogs, nor to comply with them. According to Wing, a testing algorithm for deep learning, DeepXplore, recently found thousands of errors, some of them fatal, in fifteen state-of-the-art deep neural networks trained on ImageNet and in software for self-driving cars. Issues around causality versus correlation have hardly begun to be explored.

Safety and security? Research in these areas is very active, but not yet definitive.

This could be important.

So I said again and again over my lifetime. Now we know. AI applications arrive steadily. Some believe we’ll eventually have indefatigable, smart, and sensitive personal assistants to transform and enhance our work, our play, our lives. Researchers are acting on those beliefs to bring such personal assistants about: the Guardian Angel, Maslow, Watson. With such help, humans could move into an era of unprecedented abundance and leisure. Others cry halt! Jobs are ending! Firms and governments are spying on our every move! The machines will take over! They want our lunch! They lack human values! It will be awful!3

Which will it be?

Notes

  1. Journalist Sarah Todd wrote “Inside the surprisingly sexist world of artificial intelligence” (Quartz, October 25, 2015) about the sexism and lack of diversity in AI. The piece suggests women won’t pursue AI because it de-emphasizes humanistic goals. Maybe public fears about AI stem from the homogeneity of the field, she went on. To close the gap, schools need to emphasize the humanistic applications of AI. And so on. Although many applications of AI grow out of a sexist culture and reflect it, readers of this history can also see the fallacies in Todd’s argument. AI started out as a way of understanding human intelligence. That continues to be one of its major goals, which is why it partners with psychology and brain science. Its humanistic goals are central, whether to understand intelligence or to augment it. But all scientific and technological fields, save perhaps the biological sciences, could use more women practitioners and more people of color. That is being addressed in many places and many ways beyond the scope of this book; one example is the national nonprofit AI4All, launched in 2017 by Stanford’s Fei-Fei Li and funded by Melinda Gates, which aims to make AI researchers, and hence AI research, more diverse. A 2019 report from NYU’s AI Now Institute says this is not enough (West et al., 2019).
  2. The video in which Mickens’ quote appears is mostly about the perils of machine learning, especially the hilariously sad story of Tay, Microsoft’s chatbot, which had to be taken down from the Internet after 16 hours because of what it was learning from its training set, the gutter of the Internet.
  3. The cries of pain and alarm are too numerous to list. Privacy, meddling, reshaping our sense of ourselves as unique, and more. About the future job market, for example, books and articles abound. See, for example, the relatively optimistic book by Erik Brynjolfsson and Andrew McAfee, Race Against the Machine: How the Digital Revolution is Accelerating Innovation, Driving Productivity, and Irreversibly Transforming Employment and the Economy (Digital Frontier Press, 2011), or the careful quantitative study from the University of Oxford by Carl Benedikt Frey and Michael A. Osborne, “The Future of Employment: How Susceptible are Jobs to Computerisation?” (September 17, 2013), available via https://www.oxfordmartin.ox.ac.uk/downloads/academic/The_Future_of_Employment.pdf. But later economists questioned these findings as mere extrapolation, with no allowance for new jobs that will be created. See, for example, Parmy Olson’s Forbes.com piece on a PwC report, “AI won’t kill the job market but keep it steady, PwC report says” (July 17, 2018).

Look for the final excerpt from This Could Be Important on April 22.

About "This Could Be Important"

Pamela McCorduck was present at the creation. As a student working her way through college at Berkeley, she was pulled into a project to type a textbook manuscript for two faculty members in 1960, shortly before she was set to graduate. The authors, Edward Feigenbaum and Julian Feldman, happened to be two of the founding fathers of artificial intelligence. For McCorduck, it was the start of a life-long quest to understand—and document for the rest of us—the key players, underlying ideas, and critical breakthroughs in this transformative technology. 

Part memoir, part social history, and part biography, McCorduck’s 2019 book, This Could Be Important: My Life and Times with the Artificial Intelligentsia, shares both personal anecdotes of the giants of AI and insightful observations from the perspective of a humanities scholar. The book is published by Carnegie Mellon University Press, and CHM is thrilled to provide a series of four telling excerpts from it.

About Pamela McCorduck

Pamela McCorduck is the author of eleven published books, four of them novels, seven of them non-fiction, mainly about aspects of artificial intelligence. She lived for forty years in New York City until family called her back to California where she now lives in the San Francisco Bay Area.

About the Author

CHM Editorial consists of editors, curators, writers, educators, archivists, media producers, researchers, and web designers, looking to bring CHM audiences the best in technology and Museum news.
