Twitter’s new Violent Speech Policy looks a lot like the old one

Both policies ban you from threatening or glorifying violence in most scenarios (each version has carve-outs for “hyperbolic” speech between friends). However, the new set of rules appears to expand some concepts while trimming others. For example, the old policy stated:

Statements that express a wish or hope that someone experiences physical harm, making vague or indirect threats, or threatening actions that are unlikely to cause serious or lasting injury are not actionable under this policy, but may be reviewed and actioned under those policies. 

However, wishing someone harm is covered by the new policy, which reads:

You may not wish, hope, or express desire for harm. This includes (but is not limited to) hoping for others to die, suffer illnesses, tragic incidents, or experience other physically harmful consequences.

Except “new” is a bit of a misnomer here, because pretty much that exact policy appeared in the old abusive behavior rules — the only meaningful changes are that it’s been moved and that Twitter’s stopped providing examples.

What does feel like a meaningful change is the new policy’s vagueness about whom it’s designed to protect. The old one made it clear right up front: “You may not threaten violence against an individual or a group of people.” (Emphasis mine.) The new policy doesn’t include the words “individual” or “group,” referring instead to “others.” While that could absolutely be interpreted as protecting marginalized groups, there’s nothing specific in the text you can point to that actually confirms it.

There are a few more changes worth highlighting: the new policy bans threats against “civilian homes and shelters, or infrastructure” and includes carve-outs for speech related to video games and sporting events, as well as “satire, or artistic expression when the context is expressing a viewpoint rather than instigating actionable violence or harm.”

The company also says that punishment — which usually comes in the form of an immediate, permanent suspension or an account lock that forces you to delete offending content — may be less severe if you’re acting out of “outrage” in a conversation “regarding certain individuals credibly accused of severe violence.” Twitter doesn’t provide an example of what exactly that would look like, but my understanding is that if you were to, say, call for a famous serial killer to be executed, you may not get a permanent ban for it.

I don’t mean that as a critique of Twitter, to be clear. A social network that actually based its moderation policies only on what’s legally permissible would be an utter hellscape that I, and I think most of the population, would have no interest in. I’m not a lawyer, but I don’t see anything about banning bots in the First Amendment. (Perhaps that’s because it was written in the 1700s.)
