Whether you think artificial intelligence is dangerous, the greatest tech advance ever, or something in between, one thing is certain: AI will affect your experiences and permanently change your relationship with reality. The good news? We can still decide how.
On November 19, 2021, Eric Schmidt, former Google CEO and chairman, and Daniel Huttenlocher, inaugural dean of the MIT Schwarzman College of Computing, joined CHM President and CEO Dan’l Lewin for a CHM Live event to discuss how AI will transform our world. They shared insights from their new book, The Age of AI and Our Human Future, coauthored with legendary diplomat Henry Kissinger.
Schmidt met Kissinger when the former secretary of state visited Google in 2008, concerned the company was a threat to civilization. Kissinger worried that technologists’ work would have enormous social implications, yet that technologists themselves were uneducated in history, philosophy, and economics. Huttenlocher met Kissinger while serving as dean of Cornell Tech. Discovering their mutual interest in investigating the implications of AI, the three began to talk, and after four years of collaborative learning, they decided to share what they had learned in a book.
The authors explore fundamental questions facing humanity in a world where we engage with machines that have capabilities beyond anything we’ve ever seen. As Lewin noted, “History is the primary guidepost to look at the future.” Understanding how to wield the tools of the day and pointing them in the right direction with the right oversight has been critical in every era.
Schmidt explains how the tech of today differs from the computers of the past and how decisions about how consumer-focused AI is used must be made in the context of ethics, morals, and history, rather than business and engineering.
What will we do in the future with AI recommendations that seem to run counter to morality and ethics, Schmidt asks, especially when we don’t understand how the AI arrives at them? Do we just accept them and treat the machine like a god?
In the Age of Enlightenment, humans moved from understanding the world through faith to understanding with both faith and reason, Huttenlocher explained. That shift took a century. Today, AI is already shaping everything we do, particularly in social media. Now we understand the world not just through human reason and faith but also through non-human intelligence that shapes our daily experience. It presents the world to us in ways that are foreign, and we need a new type of understanding fast—one that incorporates an underlying philosophy and knowledge of history.
If a computer can analyze something, says Schmidt, it can generate something new: human-inspired intelligence that is not driven by humans. So, what will we build it to do? To maximize rage for profits, like Facebook’s model, or to pursue scientific discovery? Humans must feed the machines what we want them to learn. And, says Huttenlocher, we have to audit them.
“It’s sort of like a teenager, they can’t explain why they did something,” Schmidt says about AI. It’s imprecise, dynamic, and evolving, and when you combine it with other technologies you can get unexpected behaviors that can be scary. It’s busy learning even as you’re learning about it, and it can learn a lot in just one day.
The computers taught to play Go (AlphaGo) and chess (its successor, AlphaZero) were impressive at beating humans, but what was even more extraordinary was that they came up with new ways of playing games people had been playing for 2,000 years. That was unprecedented.
AI today is not just processing data at levels far beyond human capacity, it is even making discoveries, including halicin, a new antibiotic identified to combat the problem of antibiotic resistance. AI is also being used to improve radiological techniques, improve energy efficiency, achieve better learning outcomes, and make war safer for civilians.
When a language model like GPT-3 writes about itself, is it demonstrating some understanding of what (who?) it is? What are the implications of machines like these for our political systems? China already has a national strategy for dealing with where AI is headed—an algorithmic, non-democratic way to regulate these systems. It may be figuring out the excesses and implications far ahead of democracies like the US. Schmidt believes we need more funding, training, education, immigration, and talented people recruited into government in order to ensure that global platforms reflect our values.
Unfortunately, politicians have no idea how to deal with the issue. And we don’t have time to develop a philosophical understanding of the systems we have today, says Huttenlocher. If you try to legislate without a common understanding, the resulting laws will lack legitimacy, much as America’s experiment with Prohibition did in the 1920s. And today, there is no cultural agreement.
Human society does need rules. As the digital world becomes more and more seductive, people will spend more time there. What happens when we’re all living in our own private digital worlds? Will we become even more disconnected from other humans? What are the implications for nations?
Learning from history, Huttenlocher notes that technological developments invalidated assumptions about both international relations and military strategy during World War I. Schmidt is concerned about the lessons of the Cold War. If a country developed a nuclear weapon, it bragged about it, and countries then negotiated to contain it. But software is different. Our adversaries won’t tell us about tech that could kill us, because then we could build it, too. There would be no negotiation or containment, and proliferation would be a natural defensive strategy. In the future, artificial general intelligence systems must be controlled by governments, but what if governments are bad? Human conflict will happen much faster because of these tools, but we can’t have AI making decisions about military activity.
Solving these issues internationally is a multi-year process, and we must get started on these conversations now. We need agreements before, not after, AI systems are deployed. To combine and paraphrase two comments from the speakers: AI can amplify the good in the world, and it can amplify the bad.
So, it’s up to humanity. And human history is messy.