Platforms Are Fighting Online Abuse—but Not the Right Kind

For some people, particularly marginalized groups, harassment is a chronic problem. But the best tools to help them only work for “acute” situations.

We are all at risk of experiencing occasional harassment—but for some, harassment is an everyday part of life online. In particular, many women in public life experience chronic abuse: ongoing, unrelenting, and often coordinated attacks that are threatening and frequently sexual and explicit. Scottish First Minister Nicola Sturgeon and former New Zealand Prime Minister Jacinda Ardern, for example, have both suffered widely reported abuse online. Similarly, a recent UNESCO report detailing online violence against women journalists found that Nobel Prize–winning journalist Maria Ressa and UK journalist Carole Cadwalladr faced attacks that were “constant and sustained, with several peaks per month delivering intense abuse.”

We, two researchers and practitioners who study the responsible use of technology and work with social media companies, call this chronic abuse, because there is not one single triggering moment, debate, or position that sparks the steady blaze of attacks. But much of the conversation around online abuse—and, more critically, the tools we have to address it—focuses on what we call the acute cases. Acute abuse is often a response to a debate, a position, or an idea: a polarizing tweet, a new book or article, some public statement. Acute abuse eventually dies down.

Platforms have dedicated resources to help address acute abuse. Users under attack can block individuals outright or mute content and other accounts, moves that let them remain on the platform while shielding them from content they do not want to see. They can limit interactions with people outside their networks using tools like closed direct messages and private accounts. There are also third-party applications that try to fill the gaps in these defenses by proactively muting or filtering content.

These tools work well for dealing with episodic attacks. But for journalists, politicians, scientists, actors—anyone, really, who relies on connecting online to do their job—they are woefully insufficient. Blocking and muting do little against ongoing coordinated attacks, in which entire groups maintain a continuous stream of harassment from different accounts. Even when users successfully block their harassers, the ongoing mental health impact of seeing a deluge of attacks is immense; in other words, the damage is already done. These are retroactive tools, useful only after someone has been harmed. Closing direct messages and making an account private can protect the victim of an acute attack, who can go public again after the harassment subsides. But these are not realistic options for the chronically abused; over time, they only remove people from broader online discourse.

Platforms need to do more to enhance safety-by-design, including upstream solutions such as improving human content moderation, dealing with user complaints more effectively, and building better systems to take care of users who face chronic abuse. Organizations like Glitch are working to educate the public about the online abuse of women and marginalized groups while providing resources to help targets tackle these attacks, including adapting bystander training techniques for the online world, pushing platform companies to improve their reporting mechanisms, and urging policy change.

But toolkits and guidance, while extremely helpful, still place the burden of responsibility on the shoulders of the abused. Policymakers must also do their part to hold platforms responsible for combating chronic abuse. The UK’s Online Safety Bill is one mechanism that could compel platforms to tamp down abuse. The bill would force large companies to make their policies on removing abusive content and blocking abusers clearer in their terms of service. It would also legally require companies to offer users optional tools that help them control the content they see on social media. However, debate over the bill has weakened some of the proposed protections for adults in the name of freedom of expression, and the bill still focuses on tools that help users make choices, rather than tools and solutions that work to stop abuse upstream.

In the US, the proposed Platform Accountability and Transparency Act would let the Federal Trade Commission require that external researchers be given access to platform data in order to hold platforms accountable. If the bill passes, researchers could, for example, audit the models platforms use to identify accounts generating toxic speech and verify that those models work as they should. Such steps are especially needed now that Twitter has revoked journalists’ and researchers’ free access to its APIs, cutting off an important tool for understanding misinformation, disinformation, and coordinated harassment campaigns. These accountability mechanisms are still in their early days. It is not yet settled who should get access to data, how much of it, or precisely how external parties should hold platforms accountable. We encourage lawmakers to include specific language addressing chronic harassment and to require data sharing with trusted third parties and researchers, so external groups can help hold platforms accountable and monitor the health of our online communities.

The idea that people should shrug off abuse as the price of being online is based on the wrong-headed assumption that harassment is an acute problem rather than a chronic one. So, too, is the idea that individuals should shoulder the work of fighting online abuse. We’ve already seen the impact of chronic abuse on prominent women. These women were brave enough to speak about their experiences. If platforms and policymakers do not address this issue now, countless unseen others will be silenced.