
Hate “Clusters” Spread Disinformation Across Social Media. Mapping Their Networks Could Disrupt Their Reach.

Jigsaw · 6 min read · Jul 28, 2021


Online content increasingly travels across platforms — memes go viral, birthday invitations are sent to friends in different places, and a song uploaded in one place is re-shared on many others. Harmful content, like misinformation and extremist propaganda, travels across platforms too. Extremists and people who spread misinformation are like the rest of us in that they use more than one online platform to talk with their friends and collaborators. Jigsaw’s interviews with violent white supremacists found that they use different online platforms for complementary purposes, such as recruiting new followers and coordinating private events.

Despite the networked nature of this content, most studies have focused on individual platforms, limiting our understanding of how misinformation and violent extremism travel across social ecosystems. This research has helped some platforms improve their content moderation efforts. However, addressing just one platform can only go so far: numerous studies show that removed content often simply resurfaces on another platform.

Malicious actors, such as violent white supremacists and people who knowingly spread COVID-19 disinformation, compound this problem by operating primarily in loose social networks rather than cohesive groups. This informal structure means that the name or logo a group uses on one platform may look different on another, making these actors harder to track across the internet. Tackling these issues effectively requires a dynamic, internet-wide approach that can capture malicious actors’ informal, decentralized networks.

A new study directly addresses the decentralized nature of harmful content by applying a novel “cluster” analysis methodology. An interdisciplinary team of social scientists and natural scientists at George Washington University developed this approach to study ISIS online in 2016. They identified 196 pro-ISIS “clusters,” or groups of individual users on a social media platform, as ISIS was rising in prominence and attracting recruits online internationally. They found that these self-organizing clusters behaved much more fluidly than traditional, hierarchical militant groups, and that cluster analysis therefore better captured how modern extremist movements were operating online. This “meso-level” analysis of extremist clusters — looking neither at the micro-level individual extremist nor at the macro-level global offline movement — offered a valuable window into the movement’s future: the team found that cluster activity could help predict real-world attacks.

In 2019, Jigsaw teamed up with the George Washington University researchers to apply this methodology to hate speech and COVID-19 misinformation. Cluster mapping was a fitting approach for both the white supremacist movement and medical disinformants because of their often informally organized, decentralized online networks. We mapped these movements at the ecosystem level, looking at six platforms that included both mainstream and fringe social media networks frequented by violent white supremacists and disinformation purveyors: Facebook, Instagram, Gab, 4chan, Telegram, and VKontakte. Platforms were selected based on whether groups of three or more users engaged in sustained, public discussion with one another, and on the availability of data.

To map the network of “hate clusters” and clusters spreading COVID-19 misinformation, the team at GWU first manually identified 1,245 public clusters that actively spread hate and misinformation. We labeled a cluster as hateful when two of its 20 most recent posts at the time of classification included hate content. Hate content was defined as advocating hatred, hostility, or violence toward members of a race, ethnicity, nation, religion, gender, gender identity, sexual orientation, immigration status, or other defined sector of society, as detailed in the FBI definition of a hate crime. These clusters contained public posts expressing a range of extremist ideologies, including but not limited to neo-Nazism, Islamophobia, and male supremacy. Clusters were identified in the six languages most frequently used by the white supremacist movement: English, French, German, Portuguese, Russian, and Spanish. In total, 29.8 million posts were collected from these clusters between June 1, 2019 and March 23, 2020.
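As a minimal sketch of that labeling rule (the `is_hate_content` classifier and the newest-first post list are assumptions for illustration, not the study’s actual tooling, and we read “two out of the 20” as a minimum):

```python
from typing import Callable

def label_cluster_hateful(
    recent_posts: list[str],
    is_hate_content: Callable[[str], bool],
    window: int = 20,
    threshold: int = 2,
) -> bool:
    """Label a cluster hateful if at least `threshold` of its `window`
    most recent posts contain hate content."""
    window_posts = recent_posts[:window]  # assumes newest-first ordering
    hate_count = sum(1 for post in window_posts if is_hate_content(post))
    return hate_count >= threshold
```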

Network map where nodes represent clusters of social media users posting hateful content and edges represent hyperlinks posted by the clusters linking to one another. Data from June 1 to Dec 30, 2019.

We mapped the hate cluster network by looking at the hyperlinks between clusters, and found that harmful content, including hateful posts and COVID-19 misinformation narratives, spreads quickly between platforms. Hyperlinks facilitate this by acting like “wormholes” that transport users between platforms in a single click, crossing space, time, and moderation regimes. As a result, users and content move freely between the moderated and unmoderated platforms connected in the network.
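A minimal sketch of how such a cluster-to-cluster graph might be assembled with Python’s networkx library (the record format and sample data are hypothetical, not the study’s pipeline):

```python
import networkx as nx

# Hypothetical records of observed hyperlinks between public clusters:
# (source_cluster, source_platform, target_cluster, target_platform)
hyperlinks = [
    ("fb_01", "Facebook", "vk_07", "VKontakte"),
    ("vk_07", "VKontakte", "tg_03", "Telegram"),
    ("tg_03", "Telegram", "4c_02", "4chan"),
]

G = nx.DiGraph()
for src, src_platform, dst, dst_platform in hyperlinks:
    G.add_node(src, platform=src_platform)
    G.add_node(dst, platform=dst_platform)
    # Each edge is a "wormhole": a posted hyperlink carrying users
    # from one cluster directly into another, often across platforms.
    G.add_edge(src, dst)

# Clusters that bridge many others are candidates for the network's core.
print(nx.degree_centrality(G))
```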

A qualitative analysis of the posts by hate clusters in this study, along with other studies of the online white supremacist movement, shows that this decentralized design is intentional. Malicious actors create redundant connections to one another across platforms to subvert content moderation efforts and gain resilience to deplatforming. These connections between mainstream and fringe platforms are also designed to funnel recruits, or curious potential members, from the mainstream into unmoderated spaces.

As the study authors note:

“An extremist group has incentives to maintain a presence on a mainstream platform (e.g., Facebook Page) where it shares incendiary news stories and provocative memes to draw in new followers. Then once they have built interest and gained the trust of those new followers, the most active members and page administrators direct the vetted potential recruits towards their accounts in less-moderated platforms such as Telegram and Gab, where they can connect among themselves and more openly discuss hateful and extreme ideologies.”

The study found that two platforms emerged as the core of the network: Telegram and VKontakte. These were the only two platforms with connections to all of the others, and they hosted the most diverse representation of languages. A number of clusters were multilingual, with some acting as repositories for translations of English extremist content into other languages. There were strong links between English, German, and Spanish clusters.
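Given a cluster-level graph like the one sketched above, with each node tagged by platform, identifying such core platforms might look like this (a sketch under the same assumptions, not the study’s code):

```python
import networkx as nx

def platform_graph(cluster_graph: nx.DiGraph) -> nx.Graph:
    """Collapse a cluster-level graph into an undirected
    platform-level graph of cross-platform links."""
    P = nx.Graph()
    for u, v in cluster_graph.edges():
        pu = cluster_graph.nodes[u]["platform"]
        pv = cluster_graph.nodes[v]["platform"]
        if pu != pv:
            P.add_edge(pu, pv)
    return P

def core_platforms(P: nx.Graph) -> list[str]:
    """A platform is 'core' here if it links to every other platform."""
    n = P.number_of_nodes()
    return [p for p in P.nodes if P.degree(p) == n - 1]
```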

These new findings reinforce the value of platforms taking an ecosystem-level lens to tackle violent extremism and misinformation. One promising observation from this ecosystem mapping study is that moderation efforts to block hyperlinks to unmoderated platforms can help. Blocking outbound links can cut clusters on that platform off from the rest of the extremist network and add friction that potentially deters users en route to harmful content. Removing links also weakens the redundancy that malicious actors build into the hate network.

For example, the study notes that Facebook has blocked posts with hyperlinks to 4chan. While it is still possible for a Facebook user to end up on 4chan in two steps — first via a link to a platform like VKontakte and then via a second link to 4chan — this mediated pathway adds friction and makes it harder for malicious actors to spread harmful content to mainstream platforms. This analysis suggests that link removal can have a powerful impact on the online hate ecosystem, visualized in the network graph above, where the Facebook nodes in blue sit isolated on the fringes of the network.
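The friction added by that block can be sketched as a shortest-path calculation on a platform-level graph (illustrative edges, not the study’s data):

```python
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("Facebook", "4chan"),      # direct wormhole, later blocked
    ("Facebook", "VKontakte"),
    ("VKontakte", "4chan"),
])

print(nx.shortest_path_length(G, "Facebook", "4chan"))  # 1 hop

# Simulate Facebook blocking posts that hyperlink to 4chan.
G.remove_edge("Facebook", "4chan")
print(nx.shortest_path_length(G, "Facebook", "4chan"))  # 2 hops, via VKontakte
```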

Finally, taking this ecosystem-level view over time allows us to see how real-world events, like the COVID-19 pandemic, affect malicious actors’ online behavior. We were able to capture how COVID-19 misinformation posts in hate clusters expanded rapidly during the pandemic’s emergence, from December 2019 through February 2020. In this ecosystem, COVID-19 misinformation appeared on 4chan weeks before it spread to other platforms, which suggests that real-time analysis of this network could have produced a warning by January 1, 2020 that hate actors would capitalize on the COVID-19 health crisis. In the future, this type of network-level content analysis could support early warning systems for emerging threats.
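As a rough sketch of the kind of early-warning heuristic such a system might use (the baseline window and alert threshold are invented for illustration, not taken from the study):

```python
def misinformation_alert(
    daily_counts: list[int],
    baseline_days: int = 14,
    factor: float = 3.0,
) -> bool:
    """Flag the most recent day if its misinformation post count far
    exceeds the average over the preceding baseline window."""
    if len(daily_counts) <= baseline_days:
        return False  # not enough history to form a baseline
    baseline = daily_counts[-baseline_days - 1:-1]
    average = sum(baseline) / len(baseline)
    return daily_counts[-1] > factor * max(average, 1.0)

# A sudden spike on one platform (e.g., 4chan) would trip the alert.
print(misinformation_alert([5] * 14 + [40]))  # True
```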


Mapping the online hate network is a valuable starting point for addressing hate and misinformation online, but questions remain about how to counter the spread of such content. For example, how do these clusters respond to real-world events or to deplatforming, and what types of harmful content are spread on which platforms?

Follow-up research is ongoing to investigate the nature of the content posted across this network, specifically to model its toxicity and the types of extremist rhetoric it contains.

Visit our Medium section on Violent Extremism to read more of our past research.

By Beth Goldberg, Research Program Manager at Jigsaw



Jigsaw is a unit within Google that explores threats to open societies, and builds technology that inspires scalable solutions.