- Social media use can have negative effects on the body image and mental health of young people.
- Social media companies have attempted to reduce exposure to potentially harmful content.
- Research shows that effectively reducing exposure to this potentially harmful content is very difficult.
- Social media use has benefits, and users should curate their feed to improve their social media experience.
This was co-written with Dr. Gemma Sharp, with Dr. Sharp taking the lead.
In September 2021, Frances Haugen leaked internal Facebook documents reporting that Instagram “make[s] body image issues worse for one in three teen girls.” But it seemed like few were really surprised by this news. Most people already thought there was a body image “problem” on social media.
Those of us conducting body image research have been publishing articles for over a decade examining the potential for social media to negatively impact body image. Some activities, such as posting and commenting on selfies, appear to be among the most problematic forms of social media engagement because they explicitly invite social comparison and promote unattainable beauty standards.
Rather than telling body image scientists (and the general public) something we didn't already know, the leaked documents prompted a range of big-picture questions. Scientists, journalists, politicians, and parents alike were asking: "What do we do about this problem?" "Whose job is it to fix it?" and "What do we do in the meantime, now that Pandora's box is already open?"
The answers to these questions are still very much evolving, but it’s important to look back on what has been done so far on social media platforms to address the body image “problem” and how we can learn from this research moving forward.
Body Image Regulation on Social Media Began More Than a Decade Ago
The story begins in earnest over a decade ago, in February 2012, when the Huffington Post published an exposé on the "secret world" of Tumblr's pro-eating disorder (ED) blogs. By pro-ED, we mean content that promotes thinness, provides advice to other members such as weight-loss strategies, and holds up being underweight as the ideal. Of course, Tumblr was not alone in having a pro-ED problem: the same content was emerging on newer social media platforms like Instagram and Pinterest, and pro-ED spaces had existed since the 1990s, when pro-anorexia (pro-ana) websites first appeared.
The difference in 2012 was the very vocal and public response from the social media platforms, whose proposed solution to the pro-ED problem focused specifically on hashtags. Hashtags organize content into one space and tell viewers (i.e., other users) what a post is about. The platforms implemented a function whereby, if users searched for pro-ED content via hashtags like #proana (pro-anorexia) and #thinspiration, a public service announcement would pop up and encourage them to contact the eating disorder support service in their local area. The platforms also completely blocked search results for certain hashtags. (These policies remain in place at the time of our writing of this article.) These strategies aimed to reduce users' exposure, and thus harm, from pro-ED content on social media platforms.
Blocking hashtags to reduce exposure to certain content is a flawed strategy. For example, research has shown that less than 30 percent of this pro-ED content is actually marked with hashtags. Pro-ED communities know that social media content moderators are on the lookout for hashtags, so they simply don’t use them.
Instead, they use other ways to signal their membership in these communities, such as weight-related comments in their profile biographies. It is also very easy for users to alter hashtags and thus evade the platform's moderation. For example, #proana can have an extra 'n' added to become #proanna. Alternatively, pro-ED community members can adopt an entirely unrelated hashtag, like #secretsociety, which content moderators will not necessarily recognize as problematic.
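For readers curious about the mechanics, the weakness described above can be illustrated with a few lines of code. This is a hypothetical sketch, not any platform's actual moderation system: it assumes a simple exact-match blocklist (the tag names are taken from the examples above) and shows how trivially altered or coded hashtags slip past it.

```python
# Hypothetical sketch: a naive exact-match hashtag blocklist,
# illustrating why the moderation strategy described above is
# easy to evade. Not based on any platform's real implementation.

BLOCKLIST = {"#proana", "#thinspiration"}

def is_blocked(hashtag: str) -> bool:
    """Flag a hashtag only if it exactly matches a blocklisted tag."""
    return hashtag.lower() in BLOCKLIST

# The exact tags are caught...
print(is_blocked("#proana"))         # True
# ...but a one-letter variant or an unrelated codeword is not.
print(is_blocked("#proanna"))        # False
print(is_blocked("#secretsociety"))  # False
```

Catching the variants would require fuzzy matching or human review of context, which is exactly the labor-intensive problem the article describes.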
More Recent Regulation Attempts
Since 2012, social media platforms have publicly announced other types of content bans, such as Instagram and Facebook's 2019 ban on pro-diet and cosmetic surgery content for users under 18 years. But these bans face the same limitations as the hashtag strategy: there is no way to remove all potentially harmful content, and such content must first be correctly labeled as harmful by human content moderators, who literally have "seconds" to make these decisions.
Another strategy that recently (2021) made international headlines was Norway's passage of legislation requiring that images digitally altered to change a person's shape or size be labeled as such. Israel (2012) and France (2017) had previously implemented similar initiatives. It is intuitive that if people know the images on social media are not real, they will not compare themselves to them. Unfortunately, however, research has shown that viewers tend to compare themselves to labeled images anyway. It seems that images are far more powerful than words, so a disclaimer label of a few words does not deter social comparison. These images still set the standard for beauty, even when they are explicitly labeled as fake (and, therefore, unattainable).
Where does this leave us?
Social media has evolved far quicker than the policies and regulations around the safety of its use. Body image scientists are working very hard to produce and publish high-quality research. However, the research process takes years, and by the time results are published, the social media landscape has often moved on. Who would have thought a platform like TikTok, originally featuring short dance videos, would become so popular in the last few years?
The body image "problem" on social media is incredibly complex, and blaming social media companies alone will not solve it. The companies are continually undertaking research with users and have implemented strategies (albeit with seemingly limited success). For the time being, then, it seems that users themselves will need to determine how to limit the harm of social media use on body image.
The last few years, when in-person contact was often not permitted, have truly highlighted the importance of connecting with others online by creating and sharing fun content on social media. When we engage with content we enjoy, we teach the algorithms that drive our social media feeds to provide us with more of that type of content. Users have much more control over curating their social media feeds than they often seem to realize.
Many of us will continue to work on the big picture questions regarding what to do with the body image “problem” on social media. In the meantime, users should employ protective filtering. Protective filtering refers to protecting one’s body image (or mental health in general) by filtering out influences that are detrimental. Celebrities, influencers, and thinspiration content can be unfollowed and blocked. Taking control of our social media experiences can help make these platforms a source of connection, information, and positivity.
However, to finish this article with such simple advice perhaps does not do justice to the significant complexity and ever-evolving nature of the issue. To truly understand how the content on these platforms may or may not (as some scholars argue) be causing harm to young people and their sense of body image, we need to start appreciating the nuances of the social media landscape. The answers to the big-picture questions of "what do we do about the problem?" and "whose job is it to fix it?" will therefore need to be complex and multifaceted. Watch this space…