
Fighting Back Against Online Radicalization

Right-wing extremist violence is a dangerous and growing threat to the U.S., one the Department of Homeland Security classified as a top threat in its 2024 Homeland Threat Assessment. Commonly espousing a combination of white nationalist, anti-government, neo-Nazi, misogynistic, anti-LGBTQ+, and other ideologies, right-wing extremism and extremist violence are not new phenomena in the United States. Extremist rhetoric and ideas are now percolating in the mainstream, thrusting ideologies once seen as fringe into the public spotlight. The internet has played a crucial role in this transformation, giving people a place not only to espouse these ideologies but to connect with others across the globe who share their beliefs. Organizing among extremist and radical groups has become significantly easier as extreme ideologies root themselves deeper in American society, making the rising threat of right-wing extremist violence increasingly difficult to combat. Fighting back against radicalization and right-wing extremist violence therefore requires increased moderation of online content.

Online forums, private chat rooms, and discussion boards give people harboring extremist ideologies the ability to connect in ways they otherwise could not. Through these connections, they are able to inspire and challenge one another, invoking the actions of previous perpetrators of extremist violence. The perpetrator of the 2014 shooting in Isla Vista, Calif., was known to have frequented online forums expressing extreme misogyny. Exposure to such ideas reinforced his already troubling views, which ultimately culminated in a 107,000-word manifesto expressing his extremely misogynistic and racist views. The shooter responsible for the deaths of two students at a New Mexico high school maintained multiple online personas across numerous forums known for espousing racist content, all idolizing the Isla Vista perpetrator and other school shooters; in some, he even referred to himself as a “future school shooter.” These online platforms are not limited exclusively to Americans. A Canadian who killed six people at a Quebec City mosque in 2017 and an Australian who killed 51 people at two mosques in New Zealand in 2019 were each known to have interacted with online platforms espousing extremist ideologies.

Connections made through the Internet allow people with similar extreme ideologies to meet and to plan the critical logistics of acts of right-wing violent extremism. In the lead-up to the January 6 attack on the United States Capitol, extremist groups used Facebook to organize the attack as well as to recruit new members and supporters. Extremist groups have also been known to use the Internet to fundraise, creating online merchandise stores and systems of membership dues and donations payable online, expanding their potential revenue streams.

These platforms also serve as a means of desensitizing those involved in violence and extremism, with video game terminology and other elements of Internet culture being familiar sights. Frequent users of these platforms often discuss wanting to beat other extremists’ “high scores” or “records.” Users have also been known to create and share memes glorifying extremist violence, or to discuss their views in slang commonly found online. The 2019 Christchurch shooter, whose attack was livestreamed on Facebook, joked about having been trained by the popular video game Fortnite and reminded his audience to subscribe to popular Swedish YouTube personality PewDiePie, a common refrain online at the time of the attack. Ever since the shooting, members of an online forum the shooter was known to frequent have posted numerous comments discussing his “score” and their desire to “beat it.”

The Internet fundamentally alters how people connect, creating a new medium through which people can socialize and spread extremist ideology beyond in-person connections. Finding content or groups that promote extremist ideas is easier than ever with the vast computing power people carry in their pockets, and they can access extremist content from anywhere with almost perfect anonymity. Manifestos written by the perpetrators of violent extremism are readily available online, both in their entirety and in quotes scattered across various online groups and platforms. The perpetrator of the 2015 Charleston church shooting was not associated with any extremist groups, but a Google search for “black on White crime” led him directly to the website of the Council of Conservative Citizens, which he credited as a major factor in his actions.

The internet poses significant risks beyond the extremist online forums and platforms where one would expect to find right-wing extremist views and ideas. On the surface, basic social media posts and interactions can expose people to a variety of extremist ideologies under the guise of Internet humor or “trolling” culture. While seemingly harmless, interactions of this sort influence content algorithms, leading to continued exposure to harmful content that grows increasingly direct in its expression of these ideas. One “innocent” meme can lead to five more, which can lead to a video, which links to a forum or discussion board, leading ever further down the rabbit hole.

Approximately half the adult population in the U.S. relies on the internet and social media for news and information. But because these platforms aim to attract viewers and clicks, they promote sensationalized content regardless of its validity. This eye-catching content is also what average users are most prone to share within their own networks. The result is growing acceptance of extremist ideas, which are by nature eye-catching and hyperbolic: confronted with them constantly over time, people become desensitized to how far outside the norm they are.

As these ideas permeate the mainstream and lose their extreme connotations, actively defending against them becomes more challenging, a process the Internet only accelerates. People today are exposed to this content at increasingly younger ages, and its sheer volume artificially creates a sense that it is accepted. The result is a vicious cycle: the Internet helps generate newfound acceptance of these ideas, which, once mainstream, encourages their continued and expanded propagation online. Breaking this cycle is crucial, but increasingly complicated.

The internet has created a wide range of ways for right-wing extremism to flourish, in both ideas and actions; effectively combating it therefore requires confronting the dangers the Internet poses. The U.S. has already taken steps in the right direction, notably becoming one of 55 countries to join the Christchurch Call to eliminate terrorist and extremist content online. However, greater emphasis must be placed not only on responding to users who create extremist content, but also on moderating content online.

Key to this is holding social media companies and other online platforms accountable for the content they promote. The first step is developing federal standards for what constitutes extremist, violent, and hateful content, so that content across all platforms is judged by the same criteria. Critically, these standards must define, in a nonpartisan manner, what is and is not protected as free speech. Building on these standards, social media companies and other online platforms should then be required to provide greater transparency about their content-promotion algorithms, giving a clearer picture of how they promote certain content. Lastly, Section 230 of the Communications Decency Act must be reformed to break down the legal shields that have long protected these platforms from liability when they host or disseminate content meeting the aforementioned standards for hateful, violent, and extremist content. Ultimately, as the internet continues to evolve, so too must the efforts to ensure it remains a place where right-wing extremism is not allowed to flourish.

DISCLAIMER: McCain Institute is a nonpartisan organization that is part of Arizona State University. The views expressed in this blog are solely those of the author and do not represent an opinion of the McCain Institute.

Noah Hersh, Junior Fellow, Preventing Targeted Violence
Published December 6, 2023