Why Safety Campaigns Can Lead to Dangerous Outcomes

Armchair psychology often undermines intended messages.

Key points

  • Armchair psychology often undermines safety campaigns and other marketing messages.
  • The problem stems from psychological phenomena such as "the boomerang effect."
  • To overcome these problems requires using best practices from behavioral science.

Imagine you’re driving along the highway and you see an electric sign reading “79 traffic deaths this year,” part of a safety campaign to reduce traffic fatalities. Would seeing that message make you less likely to crash your car shortly afterward? Or perhaps you think it would have no effect at all.

Neither is true. According to a recent peer-reviewed study in Science, one of the world’s top academic journals, you would be more likely to crash, not less. Talk about unintended consequences!

The study examined seven years of data from 880 electric highway signs in Texas, which showed the number of deaths so far this year for one week each month as part of a safety campaign. The researchers found that the number of crashes increased by 1.52 percent within three miles of the signs on these safety campaign weeks compared to the other weeks of the month, when the signs did not show fatality information.

That’s about the same impact as raising the speed limit by four miles per hour or cutting the number of highway troopers by 10 percent. The researchers calculated that the social costs of such fatality messages amount to $377 million per year, including 2,600 additional crashes and 16 additional deaths.

That’s just for one year in Texas. Unfortunately, more than half of all U.S. states run similar safety campaigns.

The cause of the crashes? Distraction. These “in-your-face” messages, the study finds, grab your attention and undermine your driving. It’s the same reason you shouldn’t text and drive.

What Does Science Say About Misguided Safety Campaigns?

Supporting their hypothesis, the scientists discovered that the increase in crashes is larger when the reported death toll is higher: later in the year, as the number displayed on the sign climbs, so does the crash rate.

And it’s not the weather: the effect of the fatality messages dropped by 11 percent between January and February, when the displayed death count resets for the year. The researchers also found that the increase in crashes is largest on more complex road segments, which demand more of the driver’s attention.

Their research also aligns with other studies. One showed that increasing people’s anxiety causes them to drive worse. Another showed drivers fatality messages in a laboratory setting and found that doing so increased their cognitive load, distracting them from the driving task.

If the authorities had paid attention to cognitive science research, they would never have launched these fatality-message campaigns. Instead, they relied on armchair psychology, following their gut intuitions about what should work rather than measuring what does work. The result was what scholars call a "boomerang effect": an intervention that produces the opposite of its intended effect.

The Failure of the Government's Anti-Drug Safety Campaign

Unfortunately, such boomerang effects happen regularly. Consider another safety campaign: the National Youth Anti-Drug Media Campaign, which ran from 1998 to 2004 and which the U.S. Congress funded to the tune of $1 billion.

Using professional advertising and public relations firms, the campaign created comprehensive marketing efforts that targeted youths aged 9 to 18 with anti-drug messaging, focusing on marijuana.

The messages were spread through television, radio, websites, magazines, movie theaters, and other venues, and through partnerships with civic, professional, and community groups, with the aim of having youths see two to three ads per week.

A 2008 National Institutes of Health-funded study found that youths did indeed see two to three ads per week. However, on the whole, greater exposure to the campaign's advertising made many youths more likely to use marijuana, not less!

Why? The authors found evidence that youths who saw the ads got the impression that their peers used marijuana widely. As a result, the youths became more likely to use marijuana themselves.

Indeed, the study found that those youths who saw more ads had a stronger belief that other youths used marijuana, and this belief made starting to use marijuana more likely. Talk about a boomerang effect.

How Can We Make Safety Campaigns Better?

Unfortunately, such use of best practices and measurement of interventions happens far too rarely. Fatality signage campaigns have run for many years without assessment, and the federal government ran the anti-drug campaign from 1998 to 2004, with the measurement study appearing only in 2008.

Instead, the authorities need to consult cognitive and behavioral science experts on nudges before launching their interventions. And what the experts will tell you is that it’s critical to evaluate the impact of proposed nudges in small-scale experiments first. That’s because, while extensive research shows nudges do work, only 62 percent have a statistically significant impact, and up to 15 percent of intended interventions may backfire.

Nonetheless, Texas, along with at least 28 other states, has pursued mortality messaging campaigns for years, without testing them effectively with behavioral scientists. Behavioral science is critical here: when road signs are tested by those without expertise in how our minds work, the results are often counterproductive.

For example, a group of engineers at Virginia Tech did a study of road signs that used humor, popular culture, sports, and other nontraditional themes with the goal of provoking an emotional response. They measured the neuro-cognitive response of participants who read the signs and found that “messages with humor, and messages that use wordplay and rhyme elicit significantly higher levels of cognitive activation in the brain… an increase in cognitive activation is a proxy for increased attention.”

The researchers concluded that because the drivers paid more attention, the signs worked. Guess what? By that definition, the fatality signs worked, too: they caused drivers to pay attention to the fatality numbers and thereby be distracted from the road. That’s an example of how not to do a study. The metric for testing road signs should be the resulting number of crashes, not whether the sign emotionally arouses and cognitively loads the reader.

But there is good news. First, in most cases it’s quite feasible to run an effective small-scale study of an intervention. States could set up a safety campaign with 100 electric signs in a diversity of settings and evaluate, over three months, the impact on crashes among drivers who passed the signs.

Similarly, policymakers could ask researchers to track the data as ads run for a few months in a variety of nationally representative markets, and then assess their effectiveness.
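To make the evaluation idea concrete, here is a minimal sketch of how such a pilot's data might be analyzed, mirroring the Texas study's design of comparing campaign weeks against the other weeks of the month. All of the numbers below are hypothetical, and the simple Poisson rate-ratio test stands in for the more careful econometric methods a real study would use.

```python
import math

# Hypothetical pilot data: crashes near the signs during campaign ("on")
# weeks versus the remaining ("off") weeks, with exposure measured in
# vehicle-miles. These figures are invented purely for illustration.
crashes_on, exposure_on = 520, 1_000_000     # campaign weeks
crashes_off, exposure_off = 1500, 3_000_000  # non-campaign weeks

rate_on = crashes_on / exposure_on
rate_off = crashes_off / exposure_off
rate_ratio = rate_on / rate_off  # > 1 means more crashes on campaign weeks

# Approximate z-test on the log rate ratio (standard Poisson approximation):
# the variance of log(ratio) is roughly 1/crashes_on + 1/crashes_off.
se = math.sqrt(1 / crashes_on + 1 / crashes_off)
z = math.log(rate_ratio) / se

print(f"rate ratio: {rate_ratio:.3f}")
print(f"z statistic: {z:.2f}")  # |z| > 1.96 -> significant at the 5% level
```

With these made-up numbers the rate ratio comes out to 1.04 (a 4 percent increase) but the z statistic is only about 0.77, well short of conventional significance. That is precisely why pilots need enough signs and enough weeks of data: effects as small as the 1.52 percent found in Texas are invisible in an underpowered study.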

And if you’re not a policymaker? You can write to and call your elected officials and ask them to make this kind of research a priority before embracing an untested safety campaign. More broadly, encourage them to avoid relying on armchair psychology and to test their intuitions before deploying initiatives that might put the public at risk.


Studies have shown that safety campaigns can lead to dangerous outcomes when armchair psychology triggers the boomerang effect. Indeed, boomerang effects regularly undercut marketing and other messages designed to influence people. To influence people successfully, leaders need to apply best practices from behavioral science and measure the impact of their interventions.


