
TDL Brief: Online Toxicity


Mar 01, 2020

We can connect with people anywhere in the world, we have access to more knowledge than we could ever consume, and, from gaming to Netflix, we have unlimited avenues of entertainment. Why, then, does the Internet – a force for social connection, knowledge, and entertainment – breed a culture of negativity?

The term “online toxicity” encompasses rude, aggressive, and degrading attitudes and behaviors exhibited on online platforms, generally in interactions between users. It can range from excessive use of profanity to outright hate speech. The prevalence of the issue, coupled with its potentially detrimental consequences, has led to increasing concern in the past few years. A starting point for addressing this negative behavior has been to ask why it occurs in the first place. From there, behavioral scientists have attempted to develop interventions to cut it off. We still have a long way to go in that regard, with online toxicity running rampant on social media and in online gaming communities. However, progress has been made through increased awareness of the consequences of toxic behavior and through interventions based on screening algorithms and behavioral science.


1. The Online Disinhibition Effect

By: John Suler, “The Online Disinhibition Effect” in CyberPsychology & Behavior, July 2004

If you’ve ever encountered an instance of online toxicity, which, considering its pervasiveness, is incredibly likely, you may have found yourself wondering, “How could someone say something like that?” The fact of the matter is that most people couldn’t say something like that – at least, not in person. 

The way people behave on the Internet doesn’t always align with how they behave in face-to-face interactions. With the rise of social media, this discrepancy has piqued the interest of behavioral scientists. One explanation for this phenomenon has been dubbed the “online disinhibition effect.” 

The idea behind this effect is that the anonymity and physical disconnect of cyberspace, much like alcohol, impair our inhibitions, making us feel less restrained and able to express ourselves more freely. In many cases, this disinhibition is “benign,” meaning that people simply feel more inclined to share aspects of their personal lives, be it secrets, hopes, dreams, or fears. These people may even be kinder and more generous online than they are in real life. However, the online disinhibition effect isn’t always positive. It can also manifest as “toxic” disinhibition, which fuels online toxicity. This toxic response is encouraged by the fact that bad behavior online often goes unpunished. People who exhibit toxic disinhibition may be ruder, more critical, and more openly hateful online than they would be under other circumstances.

This raises the question of whether the disinhibition experienced online reflects who we truly are. However, matters of self-definition are never so simple. Personality is in constant interaction with the environment: different situations or interpersonal interactions elicit different responses from us. While we should all still be held accountable for our actions online, it is likely that the disinhibition we experience online is not a revelation of our true selves but rather a product of the environmental context of the Internet.

2. Behavioral Contagion of Online Toxicity

By: Cuihua Shen et al., “Viral vitriol: Predictors and contagion of online toxicity in World of Tanks” in Computers in Human Behavior, July 2020

Behavioral, or social, contagion is a form of social influence that describes how behaviors, norms, and information propagate through social networks. One example is emotional contagion, the phenomenon by which our own mood can be influenced by the moods of the people around us. Another relevant concept is that of group norms: the behaviors considered normal or acceptable within a given group. Both emotional contagion and group norms contribute heavily to online toxicity.

Group norms may drive online toxicity through Social Cognitive Theory, a behavioral model developed by the psychologist Albert Bandura. Social Cognitive Theory posits that as children, we learn which behaviors are and are not desirable by observing the behavior of others. For example, watching your older brother treat someone badly and feel bad or suffer a consequence in return would teach you not to behave that way yourself.

This learning process becomes problematic in online environments, where people are often able to exhibit toxic behavior without any real negative consequences. In fact, some contexts actually reward this behavior, making it seem desirable and thereby encouraging more people to partake in it. This can be seen in online gaming communities where online toxicity is considered the norm.

Emotional contagion also underlies online toxicity. The emotions of the people we interact with online can influence our own mood, and our mood, whether positive or negative, in turn shapes our thoughts, attitudes, and further behavior on the Internet. The result is often a vicious cycle.

A study of data from the multiplayer online game World of Tanks demonstrated that exposure to online toxicity from others is associated with exhibiting such behavior oneself. This provides evidence for the hypothesis that online toxicity can result from behavioral contagion in the context of online gaming, and the same effect likely drives online toxicity in other areas of the Internet.
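
To make the contagion claim concrete, here is a minimal sketch in Python of the kind of association test such a study relies on: compare how often players behave toxically after being exposed to a toxic teammate versus when they were not. The records and field names below are invented for illustration and are not the study’s actual data or method.

    # Hypothetical records: for each player-match pair, whether the player was
    # exposed to a toxic teammate, and whether they behaved toxically afterward.
    matches = [
        {"exposed": True, "toxic_next": True},
        {"exposed": True, "toxic_next": True},
        {"exposed": True, "toxic_next": False},
        {"exposed": False, "toxic_next": False},
        {"exposed": False, "toxic_next": True},
        {"exposed": False, "toxic_next": False},
    ]

    def toxicity_rate(records, exposed):
        """Share of players in the given exposure group who later acted toxically."""
        group = [r for r in records if r["exposed"] == exposed]
        return sum(r["toxic_next"] for r in group) / len(group)

    # A higher rate in the exposed group is consistent with behavioral contagion.
    print(f"after exposure:   {toxicity_rate(matches, True):.2f}")   # 0.67
    print(f"without exposure: {toxicity_rate(matches, False):.2f}")  # 0.33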

3. The Fight Against Online Toxicity

By: Kalev Leetaru, “Is it Actually Possible to Solve Online Toxicity?”, Forbes, June 2019 

It may take place on the Internet, but online toxicity has serious real-world consequences. In the past few years, significant efforts have been dedicated to stopping it, yet it continues to propagate. 

The most common solution to the issue of online toxicity is the implementation of algorithms that detect and flag content that is explicitly hateful or violent – such as overt racism, sexism, homophobia, Islamophobia, and the like. What these algorithms fail to detect, however, is so-called “casual toxicity”: the bullying, criticism, and harassment that make no reference to the demographics of the victim in question. Of course, just because the explicit terms are not used doesn’t mean that the harassment is not still the result of prejudice against certain groups. In fact, demographics are often brought up in ways indirect enough to fly under the radar.
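
As a rough illustration of this blind spot, consider a minimal Python sketch of a blocklist-style filter, the simplest form such detection can take. The term list and messages are placeholders; real systems use statistical classifiers, but they share the same failure mode.

    # Naive explicit-term filter: flags a message only if it contains a term
    # from a blocklist. Terms and messages here are invented for illustration.
    EXPLICIT_TERMS = {"explicit_slur", "violent_threat"}

    def is_flagged(message: str) -> bool:
        """Return True if the message contains any explicitly banned term."""
        words = {w.strip(".,!?").lower() for w in message.split()}
        return bool(words & EXPLICIT_TERMS)

    print(is_flagged("Get out of here, explicit_slur."))    # True: overt term caught
    print(is_flagged("Nobody wants you here. Just quit."))  # False: casual toxicity missed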

Even though “casual toxicity” is not classified as overt hate speech, it can still be incredibly harmful and traumatic to those on the receiving end. The content rules and filtering algorithms that have been put in place to prevent online toxicity are a step in the right direction, but they are not currently enough to protect targeted Internet users.

4. Behavioral Science Interventions in Online Gaming

By: Daisy Soderberg-Rivkin, “How Riot Games Used Behavioral Science to Curb League of Legends Toxicity”, Spectrum Labs, March 2020

Although online toxicity is prevalent all over the Internet, it is particularly common in the realm of online gaming. As such, it isn’t surprising to see that a lot of the research conducted on online toxicity and many of the interventions developed to combat it relate to online gaming communities. 

Recognizing that online toxicity is a major problem in their popular online game, League of Legends, executives at Riot Games implemented three interventions based on behavioral science with the goal of reducing it. 

The first intervention was to give players the option to turn off chat with the team they are playing against. After one week, the executives saw that negative chat had decreased by almost 33% while positive chat had increased by 35%. The overall number of conversations didn’t drop, meaning that players were still interacting, just with less negativity.

The second intervention they tried was called “The Tribunal.” When a player was reported for negative behavior, their chat logs were made public and the community could vote as to whether they thought the behavior was in fact toxic. Reported players would be sent “reform cards,” which explicitly stated what they had done wrong. It was found that the community votes generally reached the same conclusion that the Riot executives did, which they took as a sign that players were learning how to accurately identify toxic behavior. Additionally, after players returned from their ban, they exhibited less negative behavior and many of them even apologized for their past actions. 

The third and final intervention Riot Games implemented is referred to as their “Optimus Experiment.” They used the psychological concept of priming – which is the idea that exposure to one stimulus can influence our response to a second stimulus – by showing players positive or negative statistics and varying the colors and locations in which they were presented. Depending on the type of prime they were exposed to, players demonstrated changes in their levels of verbal abuse, offensive language, and negative attitudes.
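
The description above implies a factorial design: each player sees one combination of message valence, color, and placement. Here is a hedged Python sketch of how such an assignment might be structured; the factor levels and names are invented, as the article does not specify Riot’s actual setup.

    import itertools
    import random

    # Hypothetical factors; the real experiment's levels are not public here.
    VALENCES = ["positive", "negative", "control"]
    COLORS = ["red", "blue", "white"]
    PLACEMENTS = ["loading_screen", "in_match"]

    CONDITIONS = list(itertools.product(VALENCES, COLORS, PLACEMENTS))

    def assign_condition(player_id: str) -> tuple:
        """Deterministically assign each player to one priming condition."""
        rng = random.Random(player_id)  # seeded per player so assignment is stable
        return rng.choice(CONDITIONS)

    print(assign_condition("player_42"))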

While these findings are specific to League of Legends, they are an example of the kinds of steps that can be taken to successfully reduce online toxicity. 

References

  1. Suler, J. (2004). The Online Disinhibition Effect. CyberPsychology & Behavior, 7(3), 321-326. https://doi.org/10.1089/1094931041291295
  2. Shen, C., Sun, Q., Kim, T., Wolff, G., Ratan, R., & Williams, D. (2020). Viral vitriol: Predictors and contagion of online toxicity in World of Tanks. Computers in Human Behavior, 108. https://doi.org/10.1016/j.chb.2020.106343
  3. Leetaru, K. (2019). Is it Actually Possible to Solve Online Toxicity? Forbes. https://www.forbes.com/sites/kalevleetaru/2019/06/13/is-it-actually-possible-to-solve-online-toxicity/?sh=670be169686c
  4. Soderberg-Rivkin, D. (2020). How Riot Games Used Behavioral Science to Curb League of Legends Toxicity. Spectrum Labs. https://www.spectrumlabsai.com/the-blog/how-riot-games-is-used-behavior-science-to-curb-league-of-legends-toxicity

About the Author

The Decision Lab

The Decision Lab is a Canadian think-tank dedicated to democratizing behavioral science through research and analysis. We apply behavioral science to create social good in the public and private sectors.
