Editor’s Note: This excerpt is the third in a four-part series from Pamela McCorduck’s 2019 book, This Could Be Important: My Life and Times with the Artificial Intelligentsia. All excerpts are shared with permission from the author.
Science and technology, along with traditional “humanities” topics such as language, government, ethics, and writing, influence and inform our daily lives. We’re often unaware of how they intersect, although our educational system places a clear dividing line between the two “cultures”: the hard sciences and the humanities. A literature scholar and novelist, Pamela McCorduck transcends those conceptual boundaries and turns a critical eye on the divide. In her 2019 book, This Could Be Important: My Life and Times with the Artificial Intelligentsia, McCorduck relates how she’s seen the two cultures converge in the field of artificial intelligence. Humanists and computer scientists alike ask questions about what exactly “intelligence” is and what it means to be human.
In the excerpt below, McCorduck explores how privileged white males’ dominance of Western culture is challenged by the idea of an artificial intelligence. Unlike women and people in less powerful positions, they aren’t used to having their worldview ignored, undervalued, or superseded. Interestingly, McCorduck notes that IBM’s Watson is characterized as a “he.” We should all be wary of attributing a male gender to a new type of non-human intelligence with potentially superior abilities, lest we perpetuate existing stereotypes and harmful power dynamics.
We must also apply a humanistic lens to the ongoing development of AI and machine learning, thinking deeply about incorporating into them “the FATES”: fairness, accountability, transparency, ethics, security, and safety. If we don’t, the “autonomous revolution” of AI could indeed lead to the dystopian world decried by threatened white males. But if we do, it just might bring a new kind of utopia, one with equal promise for all human beings.
In the mid-teens of the 21st century, a startling efflorescence of declarations, books, articles, and reviews appeared. (Typical titles: “The Robots Are Winning!” “Killer Robots Are Next!” “AI Means Calling Up the Demons!” “Artificial Intelligence: Homo sapiens will be split into a handful of gods and the rest of us.”) Even Henry Kissinger (2018) tottered out of the twilight of his life to declare that AI was the end of the Enlightenment, a declaration that gives pause for many reasons.
This pyrexia was caused by the profound, imminent threat AI posed to privileged white men. I laughed to friends, “These guys have always been the smartest ones on the block. They really feel threatened by something that might be smarter.” Because most of these privileged white men admitted AI had done good things for them (and none of them, so far as I know, was willing to give up his smartphone), they brought to mind St. Augustine: “Make me chaste, oh Lord, but not yet.”
Very few women took this up the same way (you’d think we don’t worry our pretty heads). One who did, Louise Aronson, a specialist in geriatric medicine (2014), dared to suggest that robot caregivers for the elderly might be a positive thing, but another woman, Sherry Turkle (2014), responded to Aronson’s opinion piece in The New York Times with a letter to the editor, worrying that such caregivers only simulated caring about us. That opened some interesting questions about authentic caring and its simulation even among humans, but it didn’t address who would do this caregiving and how many of those caregivers society could afford.
As I read this flow of heated declarations about the evils of AI, ranging from the thoughtful to the personally revealing to the pitifully derivative—a Dionysian eruption if ever there was one—I remembered the brilliant concept described and named by the film critic Laura Mulvey in 1975: the male gaze. She coined it to describe the dominant mode of filmmaking: a narrative inevitably told from a male point of view, with female characters as bearers, not makers, of meaning. Male filmmakers address male viewers, she argued, soothing their anxieties by keeping the females, so potent with threat, as passive and obedient objects of male desire. (The detailed psychoanalytic reasoning in her article you must read for yourself.)
In many sentences of Mulvey’s essay, I could easily substitute AI for women: AI signifies something other than (white or Asian) male intelligence and must be confined to its place as a bearer not a maker of meaning. To the male gaze, AI is object; its possible emergence as autonomous subject, capable of agency, is frightening and must be prevented, because its autonomy threatens male omnipotence, male control (at least those males who fret in popular journals and make movies). Maybe that younger me who hoped AI might finally demolish universal assumptions of male intellectual superiority was on to something.
The much older me knows that even if AI poses future problems (how could it not?), it already improves and enhances human intellectual efforts and has the potential to lift the burden of petty, meaningless, often backbreaking work from humankind. Who does a disproportionate share of that petty, meaningless, backbreaking work? Let a hundred Roombas bloom.1
But the handwringing said that people were at last taking AI seriously.
Another great change I’ve seen is the shift of science from the intellectual periphery of my culture to its center. (Imagine C. P. Snow presenting his Two Cultures manifesto now. Laughable.) These days, not to know science at some genuine level is to forfeit your claims to the life of the mind. That shift hasn’t displaced the importance of the humanities. As we saw with the digital humanities—sometimes tentative, sometimes ungainly, the modest start of something profound—the Two Cultures are reconciling, recognizing each other as two parts of a larger whole, which is what it means to be human. Not enough people yet know that a symbol-manipulating computer could be a welcome assistant to thinking, whether about theoretical physics or getting through the day.
AI isn’t just for science and engineering, as it was in the beginning, but reshapes, enlarges, and eases many tasks. IBM’s Watson, for instance, stands ready to help in dozens of ways, including artistic creativity: the program (“he” in the words of both his presenter and the audience) was a big hit at the 2015 Tribeca Film Festival, where it was offered as an eager colleague to filmmakers (Morais, 2015).
At the same time, AI also complicates many tasks. If an autonomous car requires millions of lines of code to operate, who can detect when a segment goes rogue? Mary Shaw, the Alan J. Perlis professor of computer science and a highly honored software expert, worries that autonomous vehicles are moving too quickly from expert assistants beside the wheel responsible for oversight, to ordinary human drivers responsible for oversight, to full automation without oversight. She argues that we lack enough experience to make this leap. Society would be better served by semi-autonomous systems that keep the vehicle in its lane, observe the speed limit, and stay parked when the driver is drunk. A woman pushing a bike, its handles draped with shopping bags, was killed by an autonomous vehicle because who anticipated that? If software engineering becomes too difficult for humans, and algorithms are instead written by other algorithms, then what? (Smith, 2018). Who gets warned when systems “learn,” but that learning takes them to places that are harmful to humans? What programming team can anticipate every situation an autonomous car (or medical system, or trading system, or. . .) might encounter? “Machine learning is inscrutable,” Harvard’s James Mickens says (USENIX, 2018). What happens when you connect inscrutability to important real-life things, or to what he calls “the Internet of hate,” also known simply as the Internet? What about AI mission creep?2
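To make Shaw’s alternative concrete, here is a minimal sketch in Python of a bounded “safety envelope” in the spirit of her argument: software that only keeps the car in its lane, caps speed at the posted limit, and refuses to start for an impaired driver, leaving every other decision to the human. Every class, threshold, and sensor reading below is a hypothetical stand-in, not any real vehicle’s API.

```python
# Illustrative sketch of a bounded "safety envelope": the software assists
# within narrow limits and never takes over full driving authority.
from dataclasses import dataclass

@dataclass
class SensorFrame:
    lane_offset_m: float    # lateral distance from lane center, meters
    speed_kmh: float        # current speed
    speed_limit_kmh: float  # posted limit from map or sign recognition
    driver_bac: float       # blood-alcohol estimate from an ignition interlock

class SafetyEnvelope:
    """Assists a human driver; never plans routes or makes decisions."""
    MAX_LANE_OFFSET_M = 0.5
    LEGAL_BAC = 0.08

    def allow_start(self, frame: SensorFrame) -> bool:
        # Stay parked when the driver is drunk.
        return frame.driver_bac < self.LEGAL_BAC

    def steering_correction(self, frame: SensorFrame) -> float:
        # Nudge back toward lane center only when drifting out of the lane.
        if abs(frame.lane_offset_m) > self.MAX_LANE_OFFSET_M:
            return -frame.lane_offset_m  # simple proportional correction
        return 0.0

    def throttle_cap(self, frame: SensorFrame) -> float:
        # Never exceed the posted limit; the human controls everything below it.
        return min(frame.speed_kmh, frame.speed_limit_kmh)

envelope = SafetyEnvelope()
frame = SensorFrame(lane_offset_m=0.8, speed_kmh=120,
                    speed_limit_kmh=100, driver_bac=0.0)
print(envelope.allow_start(frame))          # True: driver is sober
print(envelope.steering_correction(frame))  # -0.8: nudge back toward center
print(envelope.throttle_cap(frame))         # 100: capped at posted limit
```

The point of the design is what the code cannot do: there is no route planner, no pedestrian classifier, no end-to-end learned policy whose failure modes nobody anticipated, only small, auditable rules with a human responsible for the rest.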
Columbia University’s Jeannette Wing has given thought to these issues and offers an acronym: FATES. It stands for all the aspects that must be incorporated into AI, machine learning in particular: Fairness, Accountability, Transparency, Ethics, Security, and Safety. Those aspects should be part of every data scientist’s training from day one, she says, and at all levels of activity: collection, analysis, and decision-making models. Big data is already transforming all fields, professions, and sectors of human activity, so everyone must adhere to FATES from the beginning.
But fairness? In real life, multiple definitions exist, and they can conflict; the sketch following these questions illustrates two of them.
Accountability? Who’s responsible is an open question at present, but policy needs to be set, compliance must be monitored, and violations must be exposed, fixed, and, if necessary, fined.
Transparency? Explanations of why the output can be trusted are vital, but we don’t yet fully understand how some of the technology works. That’s an active area of research.
Ethics? Sometimes a problem has no “right” answer, even when the ambiguity can be encoded. Microsoft has the equivalent of an institutional review board (IRB) to oversee research (Google’s first IRB fell apart publicly after a week), but firms aren’t required to have such watchdogs, nor to comply with them. According to Wing, a testing algorithm for deep learning, DeepXplore, recently found thousands of errors, some of them potentially fatal, in fifteen state-of-the-art deep neural networks, including models trained on ImageNet and software for self-driving cars. Issues around causality versus correlation have hardly begun to be explored.
Safety and security? Research in these areas is very active, but not yet definitive.
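To see why fairness alone resists a single definition, consider a toy sketch using two textbook definitions from the machine-learning fairness literature (the applicant data is invented for the example): demographic parity asks whether two groups are approved at equal overall rates, while equal opportunity asks whether the qualified members of each group are approved at equal rates. A system can satisfy one and violate the other.

```python
# Toy illustration of two standard fairness definitions; the applicants
# below are invented for the example, not real data.

def rate(values):
    """Fraction of 1s in a list of 0/1 decisions."""
    return sum(values) / len(values)

# 1 = loan approved, 0 = denied; "qualified" marks applicants who would repay.
group_a = [
    {"approved": 1, "qualified": 1},
    {"approved": 1, "qualified": 1},
    {"approved": 0, "qualified": 1},
    {"approved": 0, "qualified": 0},
]
group_b = [
    {"approved": 1, "qualified": 1},
    {"approved": 1, "qualified": 0},
    {"approved": 0, "qualified": 0},
    {"approved": 0, "qualified": 1},
]

# Demographic parity: equal approval rates across groups.
parity_a = rate([p["approved"] for p in group_a])
parity_b = rate([p["approved"] for p in group_b])

# Equal opportunity: equal approval rates among qualified applicants.
opp_a = rate([p["approved"] for p in group_a if p["qualified"]])
opp_b = rate([p["approved"] for p in group_b if p["qualified"]])

print(f"demographic parity: group A {parity_a:.2f}, group B {parity_b:.2f}")
print(f"equal opportunity:  group A {opp_a:.2f}, group B {opp_b:.2f}")
```

In this invented data both groups are approved at the same overall rate (0.50 and 0.50), yet qualified applicants in group B fare worse (0.50 versus 0.67). Choosing which definition counts is a policy judgment, not a computation.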
This could be important.
So I said again and again over my lifetime. Now we know. AI applications arrive steadily. Some believe we’ll eventually have indefatigable, smart, and sensitive personal assistants to transform and enhance our work, our play, our lives. Researchers are acting on those beliefs to bring such personal assistants about: the Guardian Angel, Maslow, Watson. With such help, humans could move into an era of unprecedented abundance and leisure. Others cry halt! Jobs are ending! Firms and governments are spying on our every move! The machines will take over! They want our lunch! They lack human values! It will be awful!3
Which will it be?
Look for the final excerpt from This Could Be Important on April 22.
Pamela McCorduck was present at the creation. As a student working her way through college at Berkeley, she was pulled into a project to type a textbook manuscript for two faculty members in 1960, shortly before she was set to graduate. The authors, Edward Feigenbaum and Julian Feldman, happened to be two of the founding fathers of artificial intelligence. For McCorduck, it was the start of a life-long quest to understand—and document for the rest of us—the key players, underlying ideas, and critical breakthroughs in this transformative technology.
Part memoir, part social history, and part biography, McCorduck’s 2019 book, This Could Be Important: My Life and Times with the Artificial Intelligentsia, shares both personal anecdotes of the giants of AI and insightful observations from the perspective of a humanities scholar. CHM is thrilled to present a series of four telling excerpts from the book, published by Carnegie Mellon University Press.
Pamela McCorduck is the author of eleven published books, four of them novels and seven non-fiction, mainly about aspects of artificial intelligence. She lived for forty years in New York City until family called her back to California, where she now lives in the San Francisco Bay Area.