Artificial intelligence dominates the headlines, and between all the hype and doomsaying, it’s hard to know what to think. AI expert and Stanford Professor Fei-Fei Li was at CHM Live to share her experience and insights from more than two decades at the forefront of the field. Tech policy guru and current CEO of Renaissance Philanthropy Tom Kalil moderated the discussion, which was made possible by the generous support of the Patrick J. McGovern Foundation.
Li grounded the conversation in the historical timeline of artificial intelligence, beginning with the 1956 Dartmouth summer workshop, where the term was coined. There have been many hype cycles since then; magazine covers in the 1970s, for example, warned of a coming robot takeover.
But by the 1990s, a quiet revolution was underway within the field, driven by statistical modeling that, combined with computer science, became machine learning. It cracked open fields like natural language processing, computer vision, and speech recognition.
When Li was a PhD student at Caltech in the early 2000s, two things happened that defined her generation of AI researchers: statistical machine learning and the internet. Li’s ImageNet project took advantage of these developments—and the rest is (recent) history.
By making the massive ImageNet dataset open source, Li and her colleagues jumpstarted AI research. At the time, most researchers worked with small amounts of data, and the paradigm shift to abandon that approach in favor of large-scale data that could drive high-capacity models and generalization was a radical idea, one her senior colleagues viewed with skepticism.
But her persistence and passion paid off. Doors began opening for pioneers and tech companies, and since around 2010 to 2012, Li believes, we have been in a modern rebirth of AI. In March 2016, when DeepMind's AlphaGo defeated world champion Lee Sedol at the complex game of Go, the public became aware that machines were powerful enough to challenge humans at tasks we consider uniquely human.
Around the same time that a backlash against tech developed after the 2016 election and the Cambridge Analytica scandal, more investments were being made in AI by big tech companies and entrepreneurs. And in society, we began to have conversations about machine learning bias and other concerns.
Then, in November 2022, ChatGPT came on the scene. It was an awakening moment for all, as anyone could have AI at their fingertips. Suddenly, people could see that AI has a remarkable ability to learn patterns and make predictions. It also has some level of reasoning—it can explain “why.”
Li believes that the next three to five years will be exciting ones as AI begins to be used in a variety of fields to solve problems. But it will also be a time of tension for society.
As an educator, Li tries to be an honest communicator with the public and she is careful not to hype AI. But she is personally excited by the potential for AI to develop spatial intelligence—the ability to understand and interact with a 3D world, both physically and digitally.
So, is spatial intelligence the beginning of artificial general intelligence, the early AI pioneers’ dream of computers that could literally think like a human? From a scientific perspective, Li sees this ambition as a North Star, but she does not believe it can be accomplished in the full, holistic sense of what it means to be human.
Li believes the work of Stanford’s Institute for Human-Centered AI provides a critical framework in which to think about this technology and how it will impact people’s lives.
AI has deep implications for humanity—both positive and negative. On the positive side, Li believes that ambient intelligence in health care settings can give nurses, doctors, and caretakers another set of eyes and ears to ensure patient well-being. In education, AI can personalize learning for students. It is even being used in agriculture to detect weeds. There are countless positive use cases to be explored when experts seek to harness AI in their particular fields.
Despite her enthusiasm for the multi- and inter-disciplinary potential of combining AI with various sectors, Li is fully aware that the technology has many risks. The biggest in her mind? Ignorance.
Anyone who says AI is all good is ignorant of the past, says Li. Every tool in human history has been used in harmful ways. But speaking hyperbolically about existential harms ignores the fact that AI is a physical system that exists in data centers and is tethered to human society.
It is up to us to create and maintain a healthy AI ecosystem. That includes academia, entrepreneurship, both big tech and little tech, and particularly public investment, because the best outcome of public sector investment is not the technology it helps to develop but the people.
Free events like these would not be possible without the generous support of people like you who care deeply about decoding technology for everyone. Please consider making a donation.