Real-World Events Drive Increases In Online Hate Speech, Study Finds

Topline

Real-world events like elections and protests can lead to spikes in online hate speech on mainstream and fringe platforms alike, a study published Wednesday in the journal PLOS ONE found, with hate posts surging even as many social media platforms try to crack down.

Key Facts

Using machine-learning analysis, a technique that automates the building of analytical models, researchers tracked seven kinds of online hate speech across 59 million posts from users of 1,150 online hate communities, the forums where hate speech is most likely to appear, on platforms including Facebook, Instagram, 4Chan and Telegram.
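
For readers unfamiliar with the approach, the sketch below is a minimal, hypothetical illustration of machine-learning text classification of the kind the study describes; the pipeline, placeholder posts and category labels are illustrative assumptions, not the paper's actual models or data.

```python
# Hypothetical sketch: classifying posts into hate-speech categories with a
# simple TF-IDF + logistic-regression pipeline. Placeholder text throughout;
# the study's real training data, features and models are not reproduced here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Assumed labeled examples (placeholder strings, not real posts).
train_posts = [
    "placeholder hateful post A",
    "placeholder hateful post B",
    "placeholder neutral post C",
    "placeholder neutral post D",
]
train_labels = ["racial", "religious", "none", "none"]  # assumed category names

# Fit a bag-of-words classifier on the labeled examples.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(train_posts, train_labels)

# Label new posts; in the study, a step like this would run over ~59 million posts.
print(model.predict(["placeholder new post"]))
```
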

Measured as a seven-day rolling average, the number of posts containing hate speech trended upward over the course of the study, which ran from June 2019 to December 2020, rising 67% from roughly 60,000 to 100,000 daily posts.
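
As a rough illustration of that smoothing, the sketch below computes a seven-day rolling average in pandas, using placeholder counts rather than the study's data, and checks the arithmetic behind the reported 67% rise.

```python
# Sketch of a seven-day rolling average over daily post counts.
# The counts below are placeholders, not the study's figures.
import pandas as pd

daily_counts = pd.Series(
    [60_000, 62_000, 61_500, 63_000, 64_200, 65_000, 66_100, 67_300],
    index=pd.date_range("2019-06-01", periods=8, freq="D"),
)

# Average each day with the six days before it; the first six days
# lack a full window and yield NaN.
rolling_avg = daily_counts.rolling(window=7).mean()
print(rolling_avg)

# The reported 67% rise corresponds to (100,000 - 60,000) / 60,000.
print(f"{(100_000 - 60_000) / 60_000:.0%}")  # -> 67%
```
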

Sometimes social media users' hate speech grew to encompass groups that were uninvolved in the real-world events of the time.

Among the instances researchers noted were a rise in religious hate speech and antisemitism after the U.S. assassination of Iranian General Qasem Soleimani in early 2020, and a rise in religious and gender-based hate speech after the November 2020 U.S. election, in which Kamala Harris was elected the first female vice president.

Online hate speech persisted despite individual platforms’ efforts to remove it, according to the researchers.

Researchers pointed to media attention as one key driver of hate-related posts: Breonna Taylor’s killing by police initially drew little media coverage, and researchers found correspondingly little online hate speech, but when George Floyd was killed months later and coverage intensified, hate speech rose with it.

Big Number

250%. That’s how much racial hate speech increased after the murder of George Floyd, the biggest spike in hate speech researchers found within the study period.

Key Background

Hateful speech has vexed social networks for years: Platforms like Facebook and Twitter have policies banning hateful speech and have pledged to remove offensive content, but that hasn’t eliminated the spread of these posts. Earlier this month, nearly two dozen UN-appointed independent human rights experts urged more accountability from social media platforms to reduce the amount of online hate speech. And human rights experts aren’t alone in wanting social media companies to do more: A December USA Today-Suffolk University survey found 52% of respondents said social media platforms should restrict hateful and inaccurate content, while 38% said sites should be an open forum.

Tangent

Days after billionaire Elon Musk closed his deal to buy Twitter last year, promising to relax the site’s moderation policies, the platform saw a “surge in hateful conduct,” according to Yoel Roth, Twitter’s former head of safety and integrity. At the time, Roth tweeted that the safety team had taken down more than 1,500 accounts for hateful conduct in a three-day period. Musk has faced sharp criticism from advocacy groups who argue that under his leadership, and with moderation rules loosened, the volume of hate speech on Twitter has grown dramatically, though Musk has insisted impressions on hateful tweets have declined.

Further Reading

Twitter Safety Head Admits ‘Surge In Hateful Conduct’ As Firm Reportedly Limits Access To Moderation Tools (Forbes)

Some Reservations About A Consistency Requirement For Social Media Content Moderation Decisions (Forbes)

What Should Policymakers Do To Encourage Better Platform Content Moderation? (Forbes)
