
Verified by Psychology Today


Inaccuracies About Health in the Media

Even articles published in mainstream medical journals can be flawed.

Source: Maxx Studio/Shutterstock

We’ve all seen the opposing headlines:

  • Moderate Drinking of Alcohol Is Good for You vs. No Amount of Alcohol Is Safe
  • Marijuana Can Help Improve Mental Health vs. Long-term Marijuana Use Can Be Dangerous
  • Lack of Serotonin Causes Depression vs. Serotonin Has Nothing to Do With Depression
  • mRNA Vaccines Have Been Essential for Controlling COVID vs. mRNA Vaccines Cause More Deaths Than They Prevent
  • Colonoscopies Can Help Prevent Death From Colorectal Cancer vs. Colonoscopies May Not Be Effective at Preventing Deaths from Cancer

This is just a small sample of opposing headlines. How is someone supposed to figure out which headline to believe?

The answer turns out to be quite complex. First, it is important to read the article itself, because a headline does not always accurately reflect the story. For example, recent headlines about the ineffectiveness of colonoscopies would more accurately have stated, “Recommendations for Colonoscopies Are Unhelpful in Preventing Death When the Recommendations Are Not Followed.” In other words, colonoscopies are very helpful in keeping people alive, as long as they are actually performed.

Even when a headline reflects an article accurately, the science that the article may be reporting might be flawed. In my experience, most medical studies published in major journals have flaws that should put their interpretation in doubt. Unfortunately, unless a reader evaluates the reported scientific studies in a careful and sophisticated fashion, it is impossible to know whether the reported conclusions are valid.

Common reasons for flawed research that lead to apparently contradictory results include poorly designed studies, misinterpretation of results, and overgeneralization of findings. Unfortunately, we also must deal with published studies containing falsified results, such as the now-retracted 1998 study that purported to link the MMR vaccination with the development of autism.

An Example of a Poorly Designed Hypothetical Study

In a hypothetical study, a drug is tested for its effectiveness in treating an illness. Patients are not screened for the severity of their illness, and the drug is reported to show no benefit. A closer look at the study reveals that most patients had very mild disease. Thus, the drug's apparent lack of benefit could simply reflect that there was little room for improvement.

Another common design flaw is including too few participants in a study. With a small number of patients, a genuinely beneficial intervention is less likely to be detected, because statistical analysis requires a sufficiently large sample to show that an observed difference is unlikely to be due to chance alone.
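To see why small studies miss real effects, here is a short Python simulation with entirely made-up numbers (the effect size, sample sizes, and significance cutoff are illustrative, not drawn from any real trial). It repeatedly simulates a two-arm trial of a treatment that truly helps, and counts how often the benefit reaches statistical significance:

```python
import random
import statistics

random.seed(0)

def chance_of_detecting(n_per_arm, true_effect=0.3, n_sims=1000):
    """Simulate many two-arm trials of a treatment with a real (but modest)
    benefit, and count how often the observed difference reaches roughly
    p < .05, using a simple normal approximation."""
    detected = 0
    for _ in range(n_sims):
        control = [random.gauss(0, 1) for _ in range(n_per_arm)]
        treated = [random.gauss(true_effect, 1) for _ in range(n_per_arm)]
        diff = statistics.mean(treated) - statistics.mean(control)
        se = (statistics.variance(control) / n_per_arm
              + statistics.variance(treated) / n_per_arm) ** 0.5
        if abs(diff / se) > 1.96:   # roughly p < .05, two-sided
            detected += 1
    return detected / n_sims

print(f"20 patients per arm:  real benefit detected {chance_of_detecting(20):.0%} of the time")
print(f"200 patients per arm: real benefit detected {chance_of_detecting(200):.0%} of the time")
```

With these invented numbers, the small trial detects the genuine benefit only a minority of the time, while the large one detects it reliably. A "no benefit found" headline from a small study may therefore say more about the study's size than about the drug.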

Other flaws include patients selecting themselves for a study (which means they are not representative of the general population) and researchers looking for patterns in their data only after a study is completed. The problem with the latter strategy is that random, meaningless patterns can usually be found when data are inspected after collection.
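The hazard of hunting for patterns after the fact can be demonstrated with purely random numbers. In this Python sketch (all quantities invented), we generate an "outcome" and 30 patient "traits" that, by construction, have nothing to do with it, then check which traits look statistically associated with the outcome:

```python
import random

random.seed(1)

# Pure noise: 100 patients, one "outcome" and 30 unrelated "traits".
n_patients, n_traits = 100, 30
outcome = [random.random() for _ in range(n_patients)]
traits = [[random.random() for _ in range(n_patients)] for _ in range(n_traits)]

def corr(x, y):
    """Pearson correlation coefficient of two equal-length lists."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# |r| > 0.197 corresponds roughly to p < .05 for 100 patients.
spurious = [i for i, t in enumerate(traits) if abs(corr(outcome, t)) > 0.197]
print(f"'significant' associations found in random data: {len(spurious)} of {n_traits}")
```

On average, about one or two of the 30 meaningless traits will cross the significance threshold by chance alone, which is why patterns found only after the data are collected deserve skepticism.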

An Example of Misinterpretation of Results

In the same hypothetical study above, the researchers also wanted to identify possible risk factors for developing the illness in question. They found that study participants who smoked cigarettes were more prone to developing the illness, and reported that smoking may cause it, a conclusion that seems obvious since people know smoking can be harmful.

A more sophisticated analysis would suggest that a common variable (such as a stressful life) might have caused patients to smoke as well as to be more prone to developing the illness. Keeping in mind the adage that correlation does not mean causation allows us to conclude more appropriately that the association of smoking with the illness does not mean that smoking caused the illness.
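The smoking scenario can be made concrete with a toy simulation (all probabilities below are invented for illustration). Here, "stress" raises the odds of both smoking and illness, while smoking itself is given no direct effect on illness at all; yet smokers still appear sicker until we account for the confounder:

```python
import random

random.seed(2)

# Invented numbers: stress raises both the chance of smoking and the
# chance of illness; smoking has NO direct effect on illness here.
n = 10_000
people = []
for _ in range(n):
    stressed = random.random() < 0.5
    smokes = random.random() < (0.6 if stressed else 0.2)
    ill = random.random() < (0.3 if stressed else 0.1)
    people.append((stressed, smokes, ill))

def illness_rate(group):
    return sum(p[2] for p in group) / len(group)

smokers = [p for p in people if p[1]]
non_smokers = [p for p in people if not p[1]]
print(f"illness rate, smokers:     {illness_rate(smokers):.1%}")      # ~25%
print(f"illness rate, non-smokers: {illness_rate(non_smokers):.1%}")  # ~17%

# Condition on the confounder, and smoking's apparent "effect" vanishes:
stressed_smokers = [p for p in smokers if p[0]]
stressed_non_smokers = [p for p in non_smokers if p[0]]
print(f"stressed smokers:     {illness_rate(stressed_smokers):.1%}")      # ~30%
print(f"stressed non-smokers: {illness_rate(stressed_non_smokers):.1%}")  # ~30%
```

The raw comparison makes smoking look risky, but within the stressed group smokers and non-smokers fall ill at essentially the same rate, which is exactly the pattern a hidden common cause produces.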

A recent example of possible misinterpretation of results involved a study of more than 2,000 children, in which it was found that those who played video games for 21 hours a week performed better on cognitive skill tests that measured impulse control and working memory compared with children who never played video games (Chaarani, 2022). The implication in the media was that playing video games can help improve cognitive function.

However, other explanations for the association between gaming and better cognitive function include that children with better brain function were likelier to play video games. Or families who did not give their children access to video games were more likely to be impoverished and thus less apt to provide their children with environments that promoted cognitive development.

An Example of Overgeneralization of Study Results

The same drug from the hypothetical study above is tested in another hypothetical study, in which patients had to be very ill to qualify. It is reported that the drug works wonders for the illness. However, it turns out that most patients with this illness in the general population have a mild form and therefore should not be prescribed the drug, as it would offer them little benefit.

Similar kinds of generalization errors occur when the results of a study in a particular population, e.g., college students, are applied to the general population, which includes children and the elderly.

An Example of a Proof That Could Stump a Non-Expert

With many studies, a non-expert cannot properly assess their validity, because making such an assessment requires mastery of the relevant scientific field.

For example, here is a "proof" that 2 + 2 = 5. Can you spot the flaw?

  1. Since 0 = 0, we can also say: 4 – 4 = 10 – 10
  2. We rewrite both sides of the equation without changing the equality: (2 x 2) – (2 x 2) = (2 x 5) – (2 x 5)
  3. We factor each side, again without changing the equality: (2 + 2) x (2 - 2) = 5 x (2 - 2)
  4. Algebra permits performing the same mathematical operation on both sides of an equation. Once we divide both sides of our equation by (2 - 2), we find that 2 + 2 = 5

Some people immediately spot the flaw in this “proof,” while others struggle because they do not know or recall a basic math rule. Since people know that 2 + 2 cannot equal 5, they may assume that the flaw lies in one of the first three steps. But that is wrong. The rule that must be recalled is that you cannot divide by 0, which is exactly what dividing each side by (2 - 2) does in step 4.
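The same flaw can be seen by translating the “proof” into a few lines of Python: the computer happily confirms steps 1 through 3, because both sides are zero, but refuses the division in step 4:

```python
# Steps 1-3: both sides of the equation really are equal (both are zero).
lhs = (2 + 2) * (2 - 2)   # 4 x 0 = 0
rhs = 5 * (2 - 2)         # 5 x 0 = 0
assert lhs == rhs

# Step 4 divides both sides by (2 - 2), i.e., by zero -- and fails:
try:
    result = lhs / (2 - 2)
except ZeroDivisionError:
    print("flaw found: step 4 divides by zero")
```

Unlike a casual reader, the interpreter cannot be tricked by plausible-looking algebra; it flags the illegal operation the moment it is attempted.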

The large number of people who struggle to figure out the flaw in this simple example illustrates how tricky it can be to find flaws in published scientific studies.

Takeaway

Since the casual reader of news articles and studies usually cannot assess the validity of the reported results and conclusions, I suggest that significant health-related changes in your life be undertaken only after consultation with an expert in the field who has fully evaluated the published literature.

References

Chaarani, Bader, et al. 2022. “Association of Video Gaming With Cognitive Performance Among Children.” JAMA Netw Open 5(10):e2235721.

More from Ran D. Anbar M.D.