Over the past two years, major social media platforms including X and Meta have undergone sweeping layoffs and restructuring. A common theme has been dramatic cuts to the trust and safety, content moderation, and policy teams responsible for combating misinformation, hate speech, and extremist content on their platforms.

X, under its new ownership, gutted teams across the board, including its trust and safety team; reporting has shown that the team shrank from around 230 employees to roughly 20. Meta cut a staggering 21,000 employees, with teams focused on election integrity and fighting misinformation hit especially hard. Google cut the staff of Jigsaw, which builds tools to counter online hate speech and disinformation, by at least a third.

More than a year later, how have those targeted cuts played out? The impacts have been severe and increasingly visible. With fewer expert moderators and depleted policy teams, extremist groups have quickly seized the opportunity to flood social media with hateful rhetoric, misinformation campaigns, and efforts to radicalize new followers.

Across platforms, extremist influencers can more freely spread racist ideologies, recruit for their movements, and rewrite reality. Extremist militias have been openly recruiting and organizing on Facebook. Half of those in the online gaming community report having experienced hate speech. On X, once-banned extremists are being re-platformed and safeguards against hate and misinformation have crumbled. It’s no wonder that extremism is US voters’ most pressing worry.

X announced in January that it would reinstate content moderation by building a 100-person “center of excellence,” but there appears to be no real evidence that those plans have come to fruition. In fact, NBC News recently found that “X has been placing advertisements in the search results for at least 20 hashtags used to promote racist and antisemitic extremism, including #whitepower, according to a review of the platform.” The placements show that X continues to monetize extremist content, despite Elon Musk’s promises to demonetize hate posts on the platform he owns, even as he uses his own account to promote the “Great Replacement Theory” and antisemitism.

This month, unmitigated online extremism spilled over into real life after the tragic stabbing deaths of three children in the UK, when “posts on TikTok, YouTube, X and Telegram circulated false or unsubstantiated claims that the attacker was a Syrian refugee, when in fact he was from Wales.” The Institute for Strategic Dialogue traced the origin of the claims to a post on X:

“In a now-deleted post, one X user shared a screenshot of a LinkedIn post from a man who claimed to be the parent of two children present at the attack, in which he alleged that the attacker was a ‘migrant’ and advocated for ‘clos[ing] the borders completely.’”

The false claim was then amplified by fake news accounts, and a fabricated name for the attacker became a trending topic on the platform in the UK. Far-right agitators, in turn, used the same platforms to mobilize rioters who attacked police, mosques, and immigrant-owned businesses, and even stormed hotels housing asylum seekers, across more than a dozen cities and towns in the UK. Musk’s response to the riots? “civil war is inevitable.”

Make no mistake: online extremism predates the recent slashing of trust and safety teams; it’s why those teams were created in the first place. But these platforms have “become the frontline of modern terrorism and counterterrorism.” Chopping away at the teams responsible for safeguarding that space, seemingly in the name of cost-cutting over public safety, will only continue to fan the flames of extremism we see spreading across the internet and spilling over into everyday life.