
Worry and Wonder: CHM Decodes AI

By CHM Editorial | January 12, 2021

We shape our tools and they in turn shape us.

— paraphrasing Marshall McLuhan

Fact and (Science) Fiction

We live in an age of escalating technological disruption. Forces seemingly beyond our control can give us wonderful benefits, or take them away, and sometimes they can do both at the same time. It is natural to be concerned about changes that affect our daily lives and our society and to ask: Am I part of the change? Does it take me into account? What are the benefits and tradeoffs? Is it worth it—for me, my community, humanity? Such questions are as old as the history of technology.

These days, as artificial intelligence (AI) masters more and more capabilities we regard as uniquely human, it may bring us face-to-face with the kind of "other" portrayed in science fiction for over a century: creations that can increasingly imitate the functions of living minds, and of living bodies (as robots). It's natural to worry, and to hope. 

CHM decodes technology for everyone, providing objective, helpful context for people to become informed technological citizens. CHM will explore the tension between fear of and hope for new technology through the lens of artificial intelligence. We’ll look at what AI is, how it works, and its impact on our daily lives—past, present, and future. To prepare for digging deeply into how AI intersects with issues like health, work, dating, communications, and privacy, we mined our previous work to provide basic information and resources on the fundamentals of AI. We’ll help you understand the line between fact and (science) fiction. Catch up on your knowledge and see what we have in store. If you think you already have a good grasp of AI, try our KAHOOT! Challenge and enter to win a prize (the quiz must be completed by January 26, 2021).

What exactly is AI?

AI, or artificial intelligence, refers to intelligence exhibited by machines rather than humans. The term is often used to describe computers mimicking human thinking in areas like learning and problem solving. AI is becoming more and more invisible as it operates “under the hood” in many different kinds of applications. AI determines which results to show us when we search Google, and which videos or posts to suggest on YouTube and Facebook. Facebook and Instagram use AI to target online advertising, offering suggestions for what we should buy next. Digital assistants such as Siri and Amazon Alexa use AI to interact with us. The loved and hated autocomplete feature that suggests words as you text is AI in action. More recently, contact tracing apps have used AI to predict the risk of infection, helping prevent the spread of COVID-19.

AI’s history extends back centuries to a variety of automated devices and “smart” machines that were changing jobs, finance, communication, and the home long before computers. The formal field of AI is widely considered to have started at the Dartmouth Summer Research Project on Artificial Intelligence in 1956, which gathered scientists and mathematicians for six weeks of workshops reflecting different philosophies and research approaches. But advances have been slower than early projections. It turns out that some key aspects of human intelligence are not so easy to reproduce in computer code, and not necessarily the ones we would expect: seemingly simple functions like walking, vision, and recognizing objects have been among the hardest for traditional AI to master.

What are they talking about?

Different terms related to AI pop up regularly in all kinds of media, and they can be confusing. Here’s a vocabulary cheat sheet with examples from both the real world and science fiction.

Artificial General Intelligence (AGI), aka Human-level Artificial Intelligence (HLAI), or “strong AI”

This is the AI you see in science fiction, and it’s still hypothetical. It refers to a machine with the ability to understand or learn any intellectual task that a human can. If faced with an unfamiliar situation, for example, the machine could quickly figure out how to respond. This is a scary idea for some people: given how fast an AGI could access and process data, its capabilities could quickly outstrip our own. Artificial superintelligence (ASI) would go even further, surpassing the most gifted human minds across a wide range of fields.

For example: the computer HAL 9000 in the movie 2001: A Space Odyssey; the Terminator in the movies of the same name; the robot Ava in the movie Ex Machina

Artificial Narrow Intelligence or “weak AI”

Artificial narrow intelligence refers to the application of AI to specific, or narrow, tasks or problems. Most of the AI in use today is narrow. Computers are programmed directly or set up to teach themselves (see “machine learning” below) how to do particular tasks or solve specific problems.

For example: self-driving cars; character recognition; facial recognition; speech recognition; ad and product recommendations; game playing (see additional examples below in “Machine Learning” and “Deep Learning and Neural Networks”)

Symbolic AI, or “Good Old Fashioned” AI (GOFAI)

Symbolic AI began in the 1950s and was the dominant type of AI before machine learning eclipsed it in the 2010s. Symbolic AI systems are preprogrammed to follow “heuristics,” or rules of thumb, similar to the way humans consciously think through problems. They typically operate by manipulating structures built from lists of symbols, such as letters or names that represent ideas like “line” or “triangle.” A “triangle,” for example, might be represented by lists of its lines and angles, with the lines and angles themselves being symbols composed of their own parts, such as “point.” Many AI researchers once believed that such symbolic systems actually modeled how human minds worked. However, once programmed, these systems can’t improve on their own: they can’t learn to get better at their tasks, nor can they learn to do anything beyond what they were programmed to do (see the toy sketch after the examples below).

For example: Shakey the robot; automatic theorem provers; IBM’s Deep Blue chess-playing computer
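
If it helps to see the idea in code, here is a toy Python sketch of the symbolic style described above. It is purely illustrative: the shapes, rules, and names are invented for this post and are not taken from any historical system.

```python
# Toy illustration of symbolic AI: knowledge lives in hand-written symbols and rules.
# Everything here (shape names, structure, heuristic) is invented for this post.

# A figure is a structure of named symbols: lists of lines and angles,
# where each line is itself built from simpler symbols ("points").
triangle = {
    "lines": [("point_a", "point_b"), ("point_b", "point_c"), ("point_c", "point_a")],
    "angles": ["angle_abc", "angle_bca", "angle_cab"],
}

def classify(figure):
    """A hand-written heuristic (rule of thumb). The system follows it exactly
    and cannot improve or extend the rule on its own."""
    if len(figure["lines"]) == 3 and len(figure["angles"]) == 3:
        return "triangle"
    if len(figure["lines"]) == 4 and len(figure["angles"]) == 4:
        return "quadrilateral"
    return "unknown shape"

print(classify(triangle))  # -> "triangle"
```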

Machine Learning (ML) and Algorithms

Machine learning is a subset of artificial intelligence in which algorithms, or sets of rules, “learn” to detect patterns in data and use those patterns to make predictions or decisions. Before it can be applied to real-world problems, an ML system must be trained on lots of examples in the form of data. In “supervised” learning, humans provide the answer key to the training data by labeling the examples, e.g., “cat” or “dog.” In “unsupervised” learning, the system finds patterns in the data on its own and groups similar things together. In “reinforcement” learning, typically used for game playing, the system learns by playing the game or attempting the task thousands of times. Simpler ML systems use statistical techniques to make predictions or classify things. Machine learning systems are narrow AIs because they can only perform tasks for which they’ve been trained. They are only as good as the data they’re trained on, and the use of historical data sets can risk baking existing societal biases into these ML systems. (A small supervised-learning sketch follows the examples below.)

For example: machine translation; targeted advertising; assessment of risk for credit scores, insurance, etc.; IBM’s Jeopardy!-playing Watson computer
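
As a concrete, deliberately tiny illustration of supervised learning, here is a minimal Python sketch using the scikit-learn library. The “cat vs. dog” measurements and features are invented for this example.

```python
# Minimal supervised machine learning sketch (requires scikit-learn).
# The tiny "cat vs. dog" data set below is invented purely for illustration.
from sklearn.tree import DecisionTreeClassifier

# Training examples: [weight_kg, ear_length_cm], each labeled by a human.
examples = [[4.0, 6.5], [5.2, 7.0], [30.0, 10.0], [25.0, 12.0]]
labels = ["cat", "cat", "dog", "dog"]  # the human-provided "answer key"

# "Training" means the algorithm finds patterns that separate the labeled classes.
model = DecisionTreeClassifier()
model.fit(examples, labels)

# The trained model can make predictions about new, unseen examples...
print(model.predict([[4.5, 6.8]]))  # likely -> ['cat']
# ...but only for this narrow task, and only as well as its training data allows.
```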

Deep Learning, Neural Networks, and Big Data

Deep learning is a type of machine learning in which artificial neural networks, structures loosely inspired by the human brain, learn from large amounts of data, replacing simpler statistical methods. This allows machines to solve complex problems even when the data is diverse, unstructured, and interconnected. However, neural networks require much larger data sets, or “big data,” than other ML techniques for applications such as facial recognition and emotion recognition. When companies like Google and Facebook use big data sets that include personal information to make predictions about people’s behavior, concerns about privacy can arise. (A small neural-network sketch follows the examples below.)

For example: facial recognition; self-driving cars; deep fakes; art generation
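
To make “layers of artificial neurons” a little less abstract, here is a small Python sketch, again using scikit-learn for consistency with the example above. Real deep-learning systems use specialized libraries, far larger networks, and vastly more data; this toy example only hints at the idea.

```python
# Toy neural-network sketch (requires scikit-learn).
from sklearn.neural_network import MLPClassifier

# The classic XOR problem: its pattern is not a simple straight-line split,
# so very simple statistical methods struggle, but a small network can learn it.
inputs = [[0, 0], [0, 1], [1, 0], [1, 1]]
labels = [0, 1, 1, 0]

# One hidden layer of artificial "neurons," structures loosely inspired by the
# brain. "Deep" learning stacks many such layers.
network = MLPClassifier(hidden_layer_sizes=(8,), solver="lbfgs",
                        max_iter=2000, random_state=1)
network.fit(inputs, labels)

print(network.predict([[1, 0], [1, 1]]))  # likely -> [1 0]
```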

Singularity

The Singularity is the hypothesis that at some point machines will begin to improve themselves, taking off exponentially and leading to machine superintelligence unchecked by human controls, with unknown, possibly dangerous, outcomes for human beings. Futurists offer competing predictions about when that might happen.

Should I be worried or hopeful?

Like other technologies throughout history, AI brings advantages, disadvantages, and unintended consequences. There is a tension between the benefits and convenience of new technology powered by AI and the risks of exposing personal data and endangering our own freedom and security. An immediate concern many have about artificial intelligence is that it might replace workers, leading to high unemployment; others believe AI can help us do our jobs better by reducing human error. Some are concerned about AI mistakes that jeopardize safety, as in autonomous driving, while others think it will protect us from distracted human drivers.

Check out more CHM resources and learn about upcoming decoding AI events and online materials.

NOTE: Many thanks to Sohie Pal and other CHM teen interns for their thoughtful comments on AI that have been incorporated into this blog.

Image Caption: Boston Dynamics, Legged Squad Support System Robot prototype for DARPA, 2012. Around the size of a horse. Credit: Wikimedia Commons

About The Author

CHM Editorial consists of editors, curators, writers, educators, archivists, media producers, researchers, and web designers, looking to bring CHM audiences the best in technology and Museum news.
