
Gaming Platforms Used for Extremist Recruitment, Study Finds
A study by researchers from Anglia Ruskin University finds that extremist groups are exploiting gaming-adjacent platforms such as Twitch, Discord, and Steam to recruit impressionable users. The authors attribute the findings in part to the platforms' weak moderation.
The study, published July 30 in the journal Frontiers in Psychology, draws on interviews with content moderators, tech industry experts, and extremism experts, interpreted alongside the platforms' terms of service documentation and previous research on the topic.
According to co-authors Dr. William Allchorn and Dr. Elisa Orofino, users drawn to "hyper-masculine gaming titles" were particularly vulnerable to recruitment by these groups, which mostly espoused far-right extremist rhetoric, including the promotion of white supremacy, homophobia, misogyny, and racism. The study also found instances of Islamist extremism, although these were less common.
This type of content was repeatedly found to explicitly violate the platforms' stated terms of service, yet it proliferated nonetheless due to lackluster moderation.
“Additionally, and quite pertinently, interviewees noted narratives and online content on gaming-adjacent platforms cleaving to ‘extremist-adjacent’ fixations and egregious content that most definitely broke platform terms of service, such as child sexual abuse materials (CSAM), fixation on school shootings, and graphic depictions of sexual content and violence,” reads the paper.
The study suggests that these groups make initial contact with potential recruits through in-game interactions, often by discussing shared interests in a game's content or themes. Recruiters then funnel the conversations to "less regulated" platforms where they can share materials and propaganda more freely.
“These gaming-adjacent platforms offer extremists direct access to large, often young and impressionable audiences and they have become a key tool for extremist recruitment,” said Allchorn. “Social media platforms have attracted most of the attention of lawmakers and regulators over the last decade, but these platforms have largely flown under the radar, while at the same time becoming digital playgrounds for extremists to exploit.”
“Strengthening moderation systems, both AI and human, is essential, as is updating platform policies to address content that is harmful but technically lawful,” he added.
The unmoderated spread of harmful rhetoric in online spaces has been a pressing issue for politicians and experts worldwide. Last month, xAI had to publish an official apology after its Grok AI chatbot began spouting antisemitic and sexually violent statements, seemingly out of the blue.