On March 31, 2021, CHM hosted New York Times reporter Cade Metz in a virtual event to discuss his new book, Genius Makers: The Mavericks Who Brought AI to Google, Facebook, and the World, with Wall Street Journal science reporter Daniela Hernandez. The heart of the book is the scientists who worked in obscurity for years building computers that could mimic human thought by recognizing patterns in massive amounts of data and teaching themselves how to learn. We may not realize it, but the work of these artificial intelligence (AI) pioneers affects every corner of our daily lives.
Metz was inspired to write the book in 2016 when he witnessed an AI developed by DeepMind defeat the world champion of the complex game Go in Seoul, South Korea, a match remembered for the machine's uncanny 37th move. He was fascinated by the mixture of excitement and amazement, fear and sadness that transfixed the country, and by the people behind neural net technology, in particular Geoff Hinton, the central character in the small world of this AI specialty. Hinton, a fascinating genius, had worked for decades on many of the same ideas as DeepMind's Demis Hassabis and his colleagues.
When Hinton received the 2018 Turing Award for conceptual and engineering breakthroughs that made deep neural networks a critical component of computing, he spoke about his wife's experience with pancreatic cancer and the potential medical applications of AI. In fact, Hinton lost two wives to cancer, and he himself had such debilitating back problems from a youthful injury that he literally could not sit down. Metz makes the poignant case that Hinton faced not only technological challenges and the skepticism of colleagues but significant personal obstacles as well. And, like others in his field, Hinton's experience wrestling with the social and cultural implications of his work shines a light on the very human factors involved in developing machines that do things even the people who create them cannot anticipate.
Metz reminds us that funding for AI research comes heavily from the military, even in Silicon Valley (see "The Valley and The Swamp: Big Government in the History of Silicon Valley"). In essence, many of the games used for research are war games, including chess and Go. They're used because they are hard and, Metz speculates, because they appeal to the human drive for competitiveness. This attitude among scientists and engineers may also drive the quest to improve humans. But when absorbed in the excitement of the technological challenges, it can be easy to sideline the human element, focusing on positive outcomes and downplaying negative consequences for people. Even relatively simple technologies, like Facebook, have had huge unanticipated side effects (see "A Facebook Story: Mark Zuckerberg's Hero's Journey"). The impact of AI is, and will be, vastly more complex, especially as it becomes even more pervasive.
Metz reminds us that AI cannot (yet) do everything people can do. Today's systems are not even close to human-like general intelligence. While we have machines that can learn specific skills, like understanding words or recognizing faces in photos, that is very different from reasoning. Even small children can deal with uncertainty and respond to it quickly, but AI cannot. Recreating a human brain with neural networks is still aspirational; so, in fact, is fully understanding how our own brains work. Some hope that AI might help us better understand human intelligence.
In the meantime, what do we do with machines that are learning in ways that we don't really understand? What exactly is AI teaching itself from the vast quantities of data sucked from the internet, with all its biases, hate speech, and misinformation? What do you do with AI that misidentifies images of Black women as gorillas? How do you get data that doesn't have those kinds of deep flaws? Companies that use AI, like Google, Amazon, and Facebook, recognize there are fundamental problems, but other forces affect how they address them.
Like people, companies often have a hard time with change. They develop personalities and respond to situations differently based on their own histories. Designed to promote themselves and tell the world that what they're doing is positive, they resist the idea that they cause problems. They often push back on, or push out, those who challenge the company narrative. Diversity in the tech industry has always been a problem, Metz points out, but in the field of AI there's an added level of concern because people are choosing the data AI uses to learn, and their choices are informed by their perspectives and biases. Tech has traditionally operated by putting out a product and patching up problems later, but band-aids after the fact won't work with AI, where we need to test first to prevent serious repercussions. Is it even possible, however, to develop tools to understand and correct machines that are learning at a scale that humans cannot, and never will be able to, match?
Some AI researchers are working to help machines recognize human emotion (see "Reading Your Face: How to Humanize Technology with Emotion AI"). As AI becomes more and more sophisticated in the way it relates to people, and vice versa, countless social and cultural factors come into play. Metz reminds us that we've never been good at predicting how our technology might change us.
What are the consequences of using AI in technology that’s becoming more and more personalized? Relationships with people are a two-way street, with siblings, friends, and therapists challenging your perspective. This is healthy for society. But technology programmed to give people what they want won’t expose us to other points of view or challenge our assumptions. While technology during COVID may have enabled the development of virtual relationships, it’s still important to get out of your house (and comfort zone) and expose yourself to serendipitous human interactions.
What happens when AI comes out of the lab? Once the value of neural net research became apparent, its scientists were snapped up by tech companies at astronomical salaries. Google, for instance, paid $44 million to acquire the startup formed by Hinton and two of his students.
Carrying their academic sensibilities into the corporate world, the AI scientists published their research so everyone could use it. But the illusion of leveling the playing field ends there. Metz points out that while more people are willing to call out problems with AI, employees can still be fired for doing so. Only a few very large companies have the big data, the processing power, and the capital to develop AI applications… along with some countries, a few of which are already abusing the tools to target ethnic minorities or spread misinformation.
Today’s technology has a global reach and global actors. The AI story is just beginning, and to keep ahead of its potential for ever-widening circles of harm we all have to be vigilant digital citizens, calling attention to problems as they arise. That includes journalists, Metz says, and he’s holding himself accountable.