CHM Blog | Behind the Scenes

The Promise of the Doctor Program: Early AI at Stanford

By CHM Editorial | March 25, 2020

Editor’s Note: This excerpt is the second in a four-part series from Pamela McCorduck’s 2019 book, This Could Be Important: My Life and Times with the Artificial Intelligentsia. All excerpts are shared with permission from the author.

Pamela McCorduck was AI pioneer Ed Feigenbaum’s assistant in 1965 when she had an epiphany that changed her life. World-famous Russian computer scientist Andrei Yershov visited Stanford and wanted to see the Doctor program, one of the earliest interactive computer programs. In her 2019 book, This Could Be Important: My Life and Times with the Artificial Intelligentsia, McCorduck describes what happened when Yershov sat down in front of the teletype machine that would allow him to communicate with the computer by text. 

Yershov responded to the computer’s opening pleasantries by writing that he was tired from traveling and being away from home. The computer wrote back: “Tell me about your family.” While McCorduck and others watched, Yershov confided to the machine his intimate worries about his wife and children. Witnessing how the Doctor program evoked such an emotional response from a computer scientist even though he knew it was just a machine, McCorduck realized that something important was happening: there had been a connection between two minds, one human and one artificial.

In the excerpt below, McCorduck describes how the early Doctor program illustrates many issues that still surround artificial intelligence. There’s the dream of harnessing AI for a better future, concerns about ethics at the intersection of AI and human behavior, and the clash of personalities and perspectives in a new field with both unprecedented power and unknown risk. 

Today, medical chatbots, descendants of the Doctor program, are used to monitor patients, answer questions, and coach people to take their medications. Research has shown that people even reveal more about their health conditions, and their lapses in following doctors’ orders, to computers than they do to the doctors themselves because they don’t feel judged. McCorduck takes us back to the past where the future began.

At Stanford, I was learning by osmosis again, the way I’d learned from the graduate students at Berkeley. I was mainly learning about AI, deeply important at Stanford, which, along with Carnegie Mellon and MIT, was then one of the three great world centers of AI research. All three were undisputed world centers of computing research generally, and it’s no coincidence that AI was centrally embedded in that wider, pioneering research.

Ed Feigenbaum had come to Stanford hoping that he and John McCarthy could collaborate. They remained personally friendly but realized their destiny was to pursue different paths in AI research. When I arrived at Stanford, McCarthy was in the process of moving his research team to a handsome, low-slung semicircle of a new industrial building in the Stanford hills, perhaps five miles from Polya Hall. A now-defunct firm called General Telephone and Electronics, seeing the new structure didn't fit its research plans after all, had given it to Stanford, and it became the Stanford Artificial Intelligence Laboratory, SAIL.[1]

Among the research projects that had moved from Polya Hall to SAIL was Kenneth Colby's Doctor program. Colby was an MD and psychiatrist who thought there must be some way to improve the therapeutic process, perhaps by automating it. Patients in state psychiatric hospitals might see a therapist maybe once a month if they were lucky. If instead they could interact with an artificial therapist anytime they wanted, then whatever its drawbacks, Colby argued, it was better than the current situation. In those days before psychotropic drugs, Colby wasn't alone in thinking so. Similar work was underway at Massachusetts General Hospital. Colby had collaborated for a while with Joseph Weizenbaum, an experienced programmer who'd come from a major role in automating the Bank of America and was interested in experimenting with Lisp. Weizenbaum would soon create a list-processing language called Slip, for Symmetric List Processor, though it really had no symbolic aspirations as AI understood the term.

Doctor was the program that the eminent visiting Soviet scientist Andrei Yershov asked to see. His encounter with Doctor was the moment that artificial intelligence suddenly became something deeper and richer for me than just an interesting, even amusing, abstraction.

But Doctor raised questions. Should the machine take on this therapeutic role, even if the alternative was no help at all? The question and those that flowed from it deserved to be taken seriously, and arguments for and against were fierce. Weizenbaum warned that the therapeutic transaction was one area where a machine must not intrude; Colby countered that machine-based therapy was surely preferable to leaving patients with nothing.

Thus Doctor began a bitter academic feud between Weizenbaum and Colby, one I would later be drawn into when I published "Machines Who Think" and made for myself a determined enemy in Weizenbaum.

At the time Yershov was playing (or not) with the Doctor demonstration, Weizenbaum was already beginning to claim that Colby had ripped him off: Doctor, he charged, was just a version of Weizenbaum's own question-answering program, called Eliza (after Eliza Doolittle). Eliza was meant to simulate, or caricature, a Rogerian therapist, who simply turns any patient's statement back into a question. I'm feeling sad today. Why are you feeling sad today? I don't know exactly. You don't know exactly? Feigenbaum, who'd taught Lisp to Weizenbaum, says that Eliza had no AI aspirations and was no more than a programming experiment.
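To see how little machinery lay behind the effect, here is a minimal sketch in modern Python of the Eliza-style move just described: reflect the speaker's first-person words into second person, then hand the statement back as a question. The names and word list are mine, purely for illustration; Weizenbaum's actual program, which long predates Python, was driven by pattern-matching scripts and keyword rankings, and this toy captures only the reflect-and-question core.

```python
import re

# Hypothetical pronoun reflections for illustration; not Weizenbaum's code.
REFLECTIONS = {
    "i": "you",
    "me": "you",
    "my": "your",
    "mine": "yours",
    "am": "are",
    "i'm": "you're",
    "myself": "yourself",
}

def reflect(statement: str) -> str:
    """Swap first-person words for second-person ones."""
    words = statement.lower().rstrip(" .!?").split()
    return " ".join(REFLECTIONS.get(word, word) for word in words)

def respond(statement: str) -> str:
    """Hand the patient's statement back as a question, Rogerian style."""
    reflected = reflect(statement)
    # An opening "you're ..." becomes a "Why are you ...?" question.
    match = re.match(r"(?:you're|you are)\s+(.*)", reflected)
    if match:
        return f"Why are you {match.group(1)}?"
    # Otherwise, simply echo the reflected statement as a question.
    return reflected.capitalize() + "?"

print(respond("I'm feeling sad today."))  # Why are you feeling sad today?
print(respond("I don't know exactly."))   # You don't know exactly?
```

That a trick this small could draw real confidences, from patients and from a visiting computer scientist alike, is precisely what made Doctor and Eliza so startling.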

Colby objected strenuously to Weizenbaum's charges. Yes, they'd collaborated for a brief period, but putting real, if primitive, psychiatric skills into Doctor was Colby's original contribution and justified the new name. Furthermore, Colby was trying to make Doctor a practical venture, whereas Weizenbaum had made no improvements to his toy program.

Maybe because Weizenbaum seemed to get no traction with his claims of being ripped off, he turned to moralizing. Even if Colby could make it work, Doctor was a repulsive idea, Weizenbaum said. Humans, not machines, should be listening to the troubles of other humans. That, Colby argued, was exactly his point. Nobody was available to listen to people in mental anguish. Should they therefore be left in anguish?[2]

I agreed with Colby. Before this, I might not have; going only on first feelings, I might well have sided with Weizenbaum.

But at Stanford, I was learning to think differently. One day, I tried to explain to Feigenbaum how I’d always groped my way fuzzily, instinctively into issues, relying on feelings. Now I began to think them through logically. Ed laughed. “Welcome to analytic thinking.”

I’d entered university hoping to learn “the best which has been thought and said in the world,” as I read in Matthew Arnold’s "Culture and Anarchy" in my eager freshman year. But Arnold said more: the purpose of that knowledge was to turn a stream of fresh and free thought upon our stock notions and habits.

For me, meeting artificial intelligence did exactly that.

Notes

  1. Read an autobiography of SAIL at http://infolab.stanford.edu/pub/voy/museum/pictures/AIlab/SailFarewell.html
  2. By 2018, online therapy was thriving. One project, Woebot, a joint effort between Stanford psychologists and computer scientists, offered cheap but not free therapy to combat depression. It was a hybrid, one part interaction with a computer and one part interaction with human therapists, for people who couldn't afford the high cost of conventional therapy. Earlier projects included Ellie, at the Institute for Creative Technologies in Los Angeles, built to assist former soldiers with PTSD. Ellie's elaborate protocols seem to have overcome the problem that many patients resist telling the truth to a human therapist but feel freer with a computer. (We saw this with Soviet computer scientist Andrei Yershov.) Some decades ago, the Kaiser Foundation discovered the same reaction to ordinary medical questions: people felt judged by human doctors in ways they didn't by computers and could thus be more candid.

Look for the next excerpt from This Could Be Important on April 8.

More from This Series

About "This Could Be Important"

Pamela McCorduck was present at the creation. As a student working her way through college at Berkeley, she was pulled into a project to type a textbook manuscript for two faculty members in 1960, shortly before she was set to graduate. The authors, Edward Feigenbaum and Julian Feldman, happened to be two of the founding fathers of artificial intelligence. For McCorduck, it was the start of a life-long quest to understand—and document for the rest of us—the key players, underlying ideas, and critical breakthroughs in this transformative technology. 

Part memoir, part social history, and part biography, McCorduck's 2019 book, This Could Be Important: My Life and Times with the Artificial Intelligentsia, shares both personal anecdotes of the giants of AI and insightful observations from the perspective of a humanities scholar. The book is published by Carnegie Mellon University Press, and CHM is thrilled to provide a series of four telling excerpts from it.

About Pamela McCorduck

Pamela McCorduck is the author of eleven published books, four of them novels and seven non-fiction, mainly about aspects of artificial intelligence. She lived for forty years in New York City until family called her back to California, where she now lives in the San Francisco Bay Area.

About the Author

CHM Editorial consists of editors, curators, writers, educators, archivists, media producers, researchers, and web designers, looking to bring CHM audiences the best in technology and Museum news.
