- Models are a way to abstract away parts of a complex system in order to more easily understand it.
- The best predictive models are mathematical ones, making quantitative skills critical for science.
- New models might seek to bridge the gap between complex scientific models and simple lay explanations.
In common parlance, a model is something you build, play with, look at, and relate to something bigger and more complicated. Or maybe it’s a person whose job is to show off a product or idea. Both senses are useful in their own ways: a model airplane has some of the important features of a full-sized airplane, but it’s much easier (and cheaper) to carry around. Along different lines, a clothing model helps you see how something fits, looks, and moves, showcasing a product for some audience. These different uses might make it seem like a model is many things, but at its heart, a model is just a simplification or example of something else, typically meant to make that “something else” easier to understand.
Scientific models have come into the limelight over the past few years, especially around hot-button issues like climate change, hurricanes, and COVID. Yet an air of mystery remains around them, in part because few people know how they work, why we use them, or what exactly goes into creating and using them. If models are meant to simplify the world, then why aren’t they, well, simple?
The short answer is that models are a bridge between the things we want to understand and our ability to comprehend them. They let us take enormously complicated things and break them up into different parameters or facets that we can manipulate and play with in order to figure out how the complicated thing works. Especially when we work with systems that are interconnected and difficult to reduce to independent, constituent parts, models allow us to at least take a look at one piece at a time.
Models can be as straightforward as a really good metaphor. For example, human memory has at times been “modeled” as a tape recorder, a subway map, an aviary (Plato), a purse, a computer program, a muscle, and even a cow’s belly. Each metaphor captures some essential properties of the thing it seeks to elucidate (memory), leaving the differences aside so that the thought can be entertained long enough to grant some new insight. No metaphor really captures the entirety of human memory, but they work well enough to help us learn something about how the mind could work.
The problem is that these sorts of metaphor models only get us so far. To really start predicting things, we need to start doing math. As much as many of us might hate it, math is far and away the best tool we have for deriving predictions, and it lets us communicate clearly with one another and with some of our most important partners in the scientific enterprise – our computers. Your computer does all of its modeling in the binary language of 1s and 0s, which means that any real-world problem you want its help with has to be abstracted into a math problem.
Invariably, this means making simplifying assumptions or ignoring some properties of the thing you want to model. Just as a model airplane can’t have fully functioning jet engines or a flight crew, our models of things like the brain or the weather have to ignore some parts of the thing we want to understand.
Here’s where the trouble comes in – ignoring or assuming away parts of the thing we want to understand makes a model simpler, but also less faithful to the real thing. Invalid assumptions and oversimplifications introduce bias into a model: the simpler it gets, the more bias it carries, and the worse it is at predicting the behavior of the real system it’s trying to approximate. As a result, a model that’s easy to understand is often inaccurate. Scientists have to trade off between a simple model that’s biased and a complex model that’s impossible to work with (or one so flexible it can predict anything, even the wrong answers!).
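To make the simplicity-versus-bias trade-off concrete, here is a toy sketch (an illustrative assumption of mine, not anything a scientist would actually publish): the “real system” below is the curve y = x², and the “simple model” is a straight line fit to it. The line is wonderfully simple – you can describe it with just two numbers – but it is systematically wrong in a way that no amount of extra data can fix. That systematic error is bias.

```python
# Toy illustration of the simplicity/bias trade-off.
# The "real system" is y = x**2 on [0, 4]; the "simple model" is a
# straight line fit by ordinary least squares. The line is easy to
# describe (a slope and an intercept), but it is systematically off.

def fit_line(xs, ys):
    """Closed-form least-squares fit of y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

xs = [i / 10 for i in range(41)]   # 0.0, 0.1, ..., 4.0
ys = [x ** 2 for x in xs]          # the complicated "truth"

a, b = fit_line(xs, ys)
errors = [abs((a * x + b) - x ** 2) for x in xs]
print(f"line: y = {a:.2f}x {b:+.2f}, worst-case error = {max(errors):.2f}")
# prints: line: y = 4.00x -2.60, worst-case error = 2.60
```

A more complex model – say, a high-degree polynomial – would drive that error toward zero, at the cost of being harder to interpret and, once the data get noisy, flexible enough to “predict” almost anything.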
When talking to one another, scientists can use models that are incredibly complex and shed many of these simplifying assumptions. But when a scientist is trying to communicate something – like why someone should evacuate for a hurricane, or why they should wear a mask during a COVID outbreak – it’s difficult to translate the complex model into something simple enough for everybody to understand. That’s why building trust in science is so important: sometimes we can’t explain to you all of the details of why the model predicts what it does (why do I need to use less water? Or get vaccinated?).
At a time when trust in science and scientists seems in jeopardy, it’s important to think about this balance. Our science is more advanced than ever, but that comes with the challenge of translating between complex models (for scientists) and simple models (for everyone else). The better we can come up with different models to bridge the gap between the difficult things we seek to understand and the people who are invested in understanding them – or at least, people who will be affected by things like the weather, or their brains, or diseases – the stronger we will be. That places models among the most important things we can develop. And hopefully, a deeper understanding of them will make science easier to support and communicate.
Roediger, H. L. (1980). Memory metaphors in cognitive psychology. Memory & Cognition, 8, 231-246.
Hintzman, D. L. (1974). Psychology and the cow's belly. The Worm Runner's Digest, 16, 84-85.