AI Is Already Here, For Good Or Bad

By CHM Editorial | March 11, 2020

Editor’s Note: This excerpt is the first in a four-part series from Pamela McCorduck’s 2019 book, This Could Be Important: My Life and Times with the Artificial Intelligentsia. All excerpts are shared with permission from the author.

“AI is us,” writes Pamela McCorduck in her 2019 book, This Could Be Important: My Life and Times with the Artificial Intelligentsia. From the smartphone in your pocket to the car that helps you park and the robot that cleans your floors, we’ve assimilated AI into our lives. But many of us still fear that the machines will “take over” and find the idea of non-human intelligence deeply disturbing. Steeped in the history of AI, McCorduck understands those concerns and the very real dangers AI poses to security and privacy, as well as the risk of algorithms that reflect their creators’ bias, but she remains optimistic.

In the excerpt below, McCorduck explores the promise and perils of AI and robots as new forms of intelligence that will be even smarter than humans. This doesn’t have to be scary. Since the dawn of AI, humans’ understanding of the intelligence of other living things has expanded to include “primates, cetaceans, elephants, dogs, cats, raccoons, parrots, rodents, bumblebees, and even slime molds.” There are trees that send chemical warnings to each other when beetles attack, arguably a form of intelligence. “Entire books appear on comparative intelligence across species, trying to tease out what’s uniquely human. It isn’t obvious,” McCorduck writes.

How should we consider our new understanding of entities other than humans that appear intelligent? Isn’t it possible our knowledge has made us ever more aware of and concerned about protecting our fellow creatures and the environment we all share? And so, why wouldn’t an exponentially smarter artificial intelligence make better decisions for our welfare than those we are capable of making ourselves?

It will ask questions we don’t even know how to ask. It will think the things we are incapable of thinking. It will experience and feel the things that we aren’t capable of. Yes, I believe that will happen eventually. We think of AI in terms of personal gadgets—my search engine will be better, my car will drive itself, my doctor will be better able to heal me, my grandma can be safely left home alone as she ages, a robot will finally do the housework. But greater contributions of AI will be planetary, teasing out how the environment and human wellbeing are subtly intertwined. AI’s greatest contribution might be its fundamental role in understanding and illuminating the laws of intelligence, wherever it manifests itself, in organisms or machines.

For a long time, I’ve been comfortable with such ideas. Unduly optimistic, maybe, but I look forward to having other, smarter minds around. (I’ve always had such minds around in the form of other humans.) I don’t much worry they’ll want my niche—though that presupposes a planet that won’t, in one of Bostrom’s scenarios, be tiled over entirely with solar panels to supply power for the reigning AI. Humans will endure, but possibly not as the dominant species. Instead, that position might belong someday to our non-biological descendants. But really, the scary future scenarios sound as if humans have no agency here. We certainly do, and as we’ll see, it’s already at work.

A search (powered by AI techniques, of course) will quickly show how we’ve already woven AI around and inside our lives, turning scientific inquiry into human desire, even stark necessity. When we did not—the nuclear catastrophe of Fukushima Daiichi, for example—we wished we had. AIs fly, crawl, inhabit our personal devices, connect us with each other willingly or not, shape our entertainment, and vacuum the living room.

Robots, a particularly visible form of AI (embodied, in the field’s term), occupy a significant space in our imaginations, their very birthplace. Books, movies, TV, and video games provoke us to conjecture about some of the ways we might behave and the issues embodied AIs will raise when they become our companions. But this visible embodiment, humanoid or otherwise, is only one form AIs will take. The disembodied, more abstract intelligences, like Google Brain, AlphaZero, and Nell¹ at Carnegie Mellon, are hidden inside machines invisible to the human eye, scoffing at human boundaries. Their implications are even more profound.

Distributed intelligence and multiagent software inhabit electronic systems all over the globe, seizing information that can be studied, analyzed, manipulated, redistributed, re-presented, exploited, above all, learned from. Human knowledge and decision-making are rapidly moving from books and craniums into executable computer code. But fair warnings and deep fears abound: algorithms that take big data at face value reinforce the bigotries those data already represent. Bad enough that data about you that you’re aware you’re volunteering (submitted for drivers’ licenses, for example) are collected, aggregated, and marketed; much worse that involuntary data collected from your on-line behavior (your purchases, your use of public transportation) are also a profit center and a spy on you. Horrors have crawled up from the dark side: bots that lie and mislead across social media, trolls without conscience, and applications whose unforeseeable consequences could be catastrophic.

Larry Smarr, founding director of the California Institute for Telecommunications and Information Technology, on the campus of the University of California at San Diego, calls this distributed intelligence and multiagent software the equivalent of a global computer. “People just do not understand how different a global AI will be with all the data about all the people being fed in real time,” he emailed me a few years ago. By sharing data, he continued, the whole world is helping to create AI at top speed, instead of a few Lisp programmers working at it piecemeal. The next years will see profound changes. In short, AI already surrounds us. Is us.

The industrialization of reading, understanding, and question-answering is well underway, soon to be delivered to your personal device. Some of these machines learn statistically; others learn at multiple, asynchronous levels, which resembles human learning. They don't wait around for programmers but are teaching themselves. Understanding the importance of this, many conventional firms, like Toyota and General Electric, are reinventing themselves as software firms in which AI is prominent.

Word- and text-understanding programs particularly interest me, partly because I’m a word and text person myself, and partly because words, spoken or written, at the level of complexity humans use them, seem to be one of the few faculties that separate human intelligence from the intelligence of other animals. (Making images is another.) Other animals communicate with each other, of course. But if their communication is deeply symbolic, that symbolism has so far evaded us. Moreover, humans have means to communicate not only face to face but also across generations and distances: first orally and through pictorial representations, then in writing and print, and now in electronic texts and videos.

For a long time, we were the only symbol-manipulating creatures on the planet. Now, with smarter and smarter computers, we at last have symbol-manipulating companions. A great conversation has begun that won’t be completed for a long time to come.

Notes

1. In this book, I will not capitalize most program acronyms or abbreviations, except for initial caps. It’s unnecessary and tiring to the reader’s eye.

Look for the next excerpt from This Could Be Important on March 25.

About "This Could Be Important"

Pamela McCorduck was present at the creation. As a student working her way through college at Berkeley, she was pulled into a project to type a textbook manuscript for two faculty members in 1960, shortly before she was set to graduate. The authors, Edward Feigenbaum and Julian Feldman, happened to be two of the founding fathers of artificial intelligence. For McCorduck, it was the start of a life-long quest to understand—and document for the rest of us—the key players, underlying ideas, and critical breakthroughs in this transformative technology. 

Part memoir, part social history, and part biography, McCorduck’s 2019 book, This Could Be Important: My Life and Times with the Artificial Intelligentsia, shares both personal anecdotes of the giants of AI and insightful observations from the perspective of a humanities scholar. The book is published by Carnegie Mellon University Press, and CHM is thrilled to provide a series of four telling excerpts from it.

About Pamela McCorduck

Pamela McCorduck is the author of eleven published books, four of them novels and seven of them non-fiction, mainly about aspects of artificial intelligence. She lived for forty years in New York City until family called her back to California, where she now lives in the San Francisco Bay Area.

About The Author

CHM Editorial consists of editors, curators, writers, educators, archivists, media producers, researchers, and web designers, looking to bring CHM audiences the best in technology and Museum news.
