What Does AI Teach Us About Human Intelligence?
What does modern Reinforcement Learning suggest about human intelligence?
Posted January 31, 2022 | Reviewed by Devon Frye
- For modern Artificial Intelligence, training a robot or drone to navigate the world is harder than winning at chess.
- Human children can solve basic movement and action tasks better than advanced AI, suggesting we have different kinds of intelligence.
- The kind of intelligence humans have is related to sensing the world, choosing actions, and then sensing again, in a continuous loop with the environment.
- Human intelligence tests should account for this kind of skill at sensing-doing loops by considering how well we adapt to changes around us.
The story of Artificial Intelligence (AI) research in the U.S. is one of competing visions of what it means to be intelligent. When most people think about intelligence, they picture the ability to solve specific kinds of difficult tasks. Can a computer play chess or Go like an expert? Can a computer answer complicated questions the way an expert would? These are certainly interesting and difficult tasks, but they prioritize what is typically hard for a human to do. Yet for a long time, computers have struggled with many tasks that humans find so simple they take them for granted.
Modern research on drone navigation has struggled with getting a drone to follow a path through the woods on its own. Getting a drone to avoid trees and follow a woodland trail was still a cutting-edge task in a 2015 paper; the researchers solved it by collecting thousands of hours of video from people hiking a trail, along with a three-camera setup that helped the drone learn to correct itself when veering off course. The final model that could fly down a path had millions of parameters: millions of small numerical values that had to be learned before the drone could fly properly. We can teach computers to beat the best human players at the most complex strategy games, but it is still an active challenge to get a computer to take a walk in the woods.
Navigation, it turns out, is a hard problem for AI, and the way it is currently approached challenges our intuition of what intelligence is. Intelligence isn’t just the ability to interact with complex, abstract systems, like doing math or playing a board game. It’s also the ability to respond adaptively to complex, incoming information from your senses. Intelligence is being able to see something—say an image from a trailhead—and know how to identify the important parts of it (this is a tree, this is the trail, this is a log), and then knowing how to act on this information. There’s a log in the middle of the trail? You can just go over it.
This is part of what psychologists call a perception-action loop. You go from seeing to doing, and that leads to seeing new things, which means you need to do new things.
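The loop itself can be sketched in a few lines of code. This is a toy illustration, not a real robot controller: the one-dimensional "trail," the `sense` function, and the actions are all invented stand-ins for a real agent's camera and motors.

```python
# A toy perception-action loop: the agent repeatedly senses the cell
# ahead on a one-dimensional "trail" and picks an action in response.
# The trail, the sensor, and the action set are invented for illustration.

TRAIL = ["path", "path", "log", "path", "log", "path"]

def sense(trail, position):
    """Perception: look at the next cell on the trail."""
    return trail[position + 1] if position + 1 < len(trail) else "end"

def choose_action(percept):
    """Action selection: respond to what was just seen."""
    return {"path": "step", "log": "climb", "end": "stop"}[percept]

def walk(trail):
    position, actions = 0, []
    while True:
        percept = sense(trail, position)   # seeing...
        action = choose_action(percept)    # ...leads to doing...
        actions.append(action)
        if action == "stop":
            return actions
        position += 1                      # ...which changes what is seen next

print(walk(TRAIL))
# → ['step', 'climb', 'step', 'climb', 'step', 'stop']
```

Each pass through the loop is a round of seeing and doing; the hard part in a real system is that `sense` returns raw pixels, not tidy labels like "log."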
Modern Reinforcement Learning (RL) is a way of setting up this type of problem so that a computer can learn from experience how to interact with its environment, typically by applying deep learning to unstructured sensory data. Hand-coding rules for what to do next based on everything the drone could see would be an enormous task, because there are simply too many possible inputs. Every change in the position of a leaf, every shifting angle on a tree, produces a different input image.
So the way researchers approach this now is by using RL to learn what to pay attention to and what actions to take in order to navigate. This process involves setting up a general learning algorithm, feeding it data, and letting it learn on its own. There is no expert spelling out all the best rules for walking in a forest.
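To make "learning on its own" concrete, here is a minimal sketch of one classic RL algorithm, tabular Q-learning, on a tiny chain of states standing in for positions along a trail. Everything here is a simplifying assumption: the environment, the reward, and the handful of states are invented for illustration, and real drone navigation uses far richer inputs and deep networks rather than a lookup table. But the key property is the same: no expert spells out the rules; the agent is given only the update rule below and its own experience.

```python
# A minimal sketch of reinforcement learning: tabular Q-learning on a
# tiny chain of states. The environment and reward are invented for
# illustration; no rules for "good" behavior are hand-coded anywhere.

import random

N_STATES = 5        # positions 0..4; reaching position 4 ends the episode
ACTIONS = [-1, 1]   # step backward or forward along the trail

def step(state, action):
    """Environment: move, with a reward only for reaching the trail's end."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Explore occasionally; otherwise exploit what has been learned.
            if rng.random() < epsilon:
                action = rng.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            nxt, reward, done = step(state, action)
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            # Q-learning update: nudge the estimate toward observed value.
            q[(state, action)] += alpha * (reward + gamma * best_next
                                           - q[(state, action)])
            state = nxt
    return q

q = train()
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)  # converges to [1, 1, 1, 1]: always step forward
```

The agent starts out wandering at random, stumbles onto the reward, and gradually propagates that experience backward until "step forward" wins at every position. Deep RL replaces the lookup table with a neural network so the same idea scales to camera images.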
This setup suggests that intelligent behavior is much more about interacting with our immediate surroundings: keeping track of what is going on around us and responding as things change. It suggests that the coordination between body and mind, that loop from seeing to doing, is actually at the heart of what it means to be intelligent. The tasks we typically think of smart people as doing are actually easy compared to the tasks most of us master in early childhood. From the perspective of designing a robot intelligence, those everyday tasks are deep challenges. Much of human genius lies not at the level of explaining things but at the level of doing things.
Trying to build intelligent systems helps us refine our understanding of intelligence itself. In this case, it suggests several insights about what intelligence is, and potentially how to measure it.
- Intelligence can be about action, not just explanation. Intelligence tests of the future might therefore measure not just how well you can answer an abstract question, but whether you can keep doing a task successfully after something about it changes.
- Intelligence involves skill at seeing. Expert vision involves knowing what to look for—how to pick out the relevant information in a scene. For example, chess masters were able to memorize board positions from real games much more quickly than non-experts. One way to measure intelligence, then, is to try to understand what an individual sees.
- Intelligence takes advantage of the seeing-acting (perception-action) loop to gather information and explore. When you don’t know exactly what to do, you can take actions that actively help you resolve the confusion (e.g., Is that a snake or a stick? Better find a cautious way to get a closer look). Measuring intelligence should therefore look not just at what you know, but what you try to find out.
Examining people is the traditional way of learning about psychology. But comparing the psychology of other intelligences—like animals and, now, computers—can give us new and surprising insights into the mind.
Giusti, A., Guzzi, J., Cireşan, D. C., He, F. L., Rodríguez, J. P., Fontana, F., ... & Gambardella, L. M. (2015). A machine learning approach to visual perception of forest trails for mobile robots. IEEE Robotics and Automation Letters, 1(2), 661-667.