- In 2019 Instagram added what it calls “sensitive content screens.”
- The authors of new research conducted two studies to test how Instagram users react to sensitive content screens.
- Participants preferred to uncover blurred-out sensitive content, with 80% falling on the “uncover” side of the response scale.
- Over half of the participants said they would get rid of these sensitive content screens altogether if they could.
In response to concerns that some of its content was putting young people at increased risk of self-harm, Instagram announced major changes to its content moderation in 2019. Specifically, Instagram said it would no longer allow any “graphic images of self-harm” (like cutting). Non-graphic self-harm content (like healed scars) would still be allowed, but Instagram would no longer surface this content via searches, hashtags, explore functions, or recommendations. Instagram also added what it calls “sensitive content screens.”
These screens make images blurry and show a warning: “Sensitive Content: This photo may contain graphic or violent content.” The user can then choose to reveal the content if they wish. Though Instagram argued that these screens were designed to protect the well-being of vulnerable users, a new pair of studies suggests the screens do little to discourage exposure to graphic or disturbing content, perhaps because they make users curious to see what the screens are hiding.
Adding sensitive content screens can ostensibly protect users from being surprised by disturbing content as they scroll through their feeds. But the authors of this new research, published in Clinical Psychological Science, noted that no empirical research has actually tested whether these screens have benefits for those who might be more negatively affected by graphic or disturbing content.
There are two main reasons to think that sensitive content screens are unlikely to be effective in reducing exposure to disturbing content. The first is called the “forbidden fruit effect.” When you feel your freedom to access something is taken away or restricted, it often makes you want that thing more. Basically, our response to being told we can’t have something is often to insist that we must have it. If sensitive content screens make us feel like we’re being restricted from certain types of social media content, that could actually increase interest in that content.
Social media users may also be vulnerable to what researchers call the “Pandora effect.” When we’re uncertain about what we might encounter, we often seek it out to resolve that uncertainty – even when we suspect the outcome will be negative. The Pandora effect leads to a type of morbid curiosity. When Instagram blurs content and tells you the content might be sensitive, that can increase your desire to see it. This is especially likely given that sensitive content screens don’t tell you why the content was blurred. Instead of deterring you from clicking, not knowing what’s behind the screen can pique your curiosity and make you more likely to click.
The authors of this new research conducted two studies to test how Instagram users react to sensitive content screens. Both studies were pre-registered, a practice that increases the trustworthiness of research findings. In the first study, 260 U.S. residents between the ages of 20 and 71 (all of whom used Instagram) completed a survey about their Instagram use along with a variety of mental health measures.
The participants were shown a sample of a sensitive content screen from Instagram and asked the following: “Imagine you are scrolling (i.e., browsing) through Instagram posts and come across the following image. Would you click to uncover this image?” Participants rated how likely they would be to uncover the image on a six-point scale ranging from “definitely no” to “definitely yes.” If participants indicated they had come across similar sensitive content screens on Instagram in the past, they were asked if they typically uncover the images blocked by these screens.
The results were overwhelmingly clear. Participants showed a preference for uncovering blurred-out sensitive content – with 80 percent falling on the “uncover” side of the response scale. Those who had seen sensitive content screens on Instagram reported that they almost always uncovered the blurred-out images they came across.
Finally, just over half of the participants said they would get rid of these sensitive content screens altogether if they could (meaning no potentially disturbing content would be blurred). Participants experiencing higher levels of depression and stress were more likely to say they would uncover such images, whereas participants who scored higher on a measure of general well-being said they would be less likely to uncover such images – though these effects were small.
In a second study using a similar sample of respondents, participants completed a mock Instagram task. The researchers compiled a set of neutral and positive photos and placed each into an Instagram frame, so they looked like actual Instagram posts.
Participants had to hit a “next” button to see the next image. When participants came to an image that was blurred out by a sensitive content screen, they could choose “uncover photo” or simply go to the next photo and skip the potentially disturbing content. (No participants were actually shown sensitive content; the task ended after they made the choice of whether to uncover the image.)
Similar to findings from the first study, 85 percent of participants attempted to uncover the sensitive image. In follow-up questions, this group of participants also said they almost always uncover content when they encounter these screens on Instagram.
In Study 2, researchers did not find links between participants' mental health and their likelihood of uncovering sensitive images. That might sound like good news, but it also means there was no evidence that the most vulnerable social media users are more likely to benefit from sensitive content screens. At best, these users were just as likely as others to choose to uncover potentially disturbing content.
Overall, these research findings are in line with other studies suggesting that warnings about sensitive content can backfire. For example, in an effort to reduce the negative impact of some media imagery on women’s body image, advertisements may feature warnings when a model’s body has been retouched. But research shows these warnings often lead women to focus more on models' bodies, perhaps because they’re trying to determine where the models were airbrushed. Likewise, numerous studies of “trigger warnings” find that these warnings may actually increase anxiety and convince people that the material they are about to encounter is more harmful than it actually is.
Is there a way to mark potentially disturbing social media content without making people more curious to see it? Perhaps one day, researchers will discover such a technique, but they’ll be working hard against humans' strong desire to see the things that we’re told might harm us.