Following a flood of death and rape threats, one man was arrested over abusive remarks on Twitter, leading to calls for a better way to report abuse and hate speech on the platform.
Earlier this week, activist Caroline Criado-Perez received a barrage of rape and death threats on Twitter—reportedly 50 in an hour—after her months-long campaign for women to appear on British banknotes resulted in Jane Austen being featured on new Bank of England £10 notes. A 21-year-old man was arrested in the Manchester area on suspicion of harassment offenses in relation to the abuse, according to the BBC.
The issue prompted public outcry for a button to report abuse on Twitter, with more than 104,000 people signing a petition to that effect. Twitter has long grappled with online harassment.
Dozens of soccer players have either been subjected to or used racial slurs. One man tracked down the teen who was sending him vile messages (the teen turned out to be the son of a friend) and confronted his abuser face to face.
Perhaps most notably, a Jewish students' group also called for a reporting form to more quickly flag tweets that break French law after a slew of anti-Semitic comments last October.
Twitter answered the call. The company announced a “report abuse” button for individual tweets in its iPhone app, and plans to add it to other parts of the Twitter ecosystem as well. For now, there’s a contact form to report abuse from specific users.
That’s great in theory, but there are two unresolved issues: Where to draw the line between abuse and stinging criticism, and how to prevent misuse of the report abuse tool.
Since Twitter does not actively monitor tweets, it relies on user flagging to identify unseemly, and even threatening, activity. This system is inherently flawed and vulnerable. For proof, look no further than the company’s spam-reporting tool, which some users routinely misuse to silence political opponents.
Last year, several conservative Twitter users claimed liberals used the spam reporting tool to silence them by having their accounts suspended—ostensibly over a difference of opinion. Search for #TwitterGulag, the hashtag used to discuss the temporary limbo, and you’ll find this is very much still an issue.
Please release @FedUp210 from #TwitterGulag this is not a spam acct. Probably targeted by bots. @Twitter @Support #tgdn #tcot
— FedUpJoe (@FedUpJoe) July 30, 2013
Twitter does review all abuse complaints, but it can take time to get to each one, and thus each affected account remains in limbo until it gets the all-clear or further action is needed.
Similar attacks are going to happen with the report abuse button. It’s a certainty.
The intended purpose of the report abuse tool is to cut down on abusive messages, including those sent by trolls who are simply aiming to evoke an emotional reaction from their targets. On the flipside, trolls will add the report abuse tool to their toolbelts.
They can orchestrate group attacks by getting their friends to report a tweet from someone in their crosshairs. They can mark every one of the victim’s tweets as abusive rather than just one. They can log in to multiple Twitter accounts of their own to file multiple reports against the same user. Anything to flag the victim’s account in Twitter’s system and trigger a suspension before the company figures out what really happened.
The “report” button may not be the most effective method given these complications, but it’ll help. There’s no single way to block out the abuse; a number of different methods are needed to best handle the trolls. The Guardian‘s suggestion of real-name or credit-card registration is absurd because of privacy concerns, but the paper neatly lays out some of the pros and cons of each anti-abuse method.
Filtering out tweets containing certain words is certainly an option, but it would seem a last resort for Twitter on any number of levels. The company appears to have looked into blocking certain words, but I suspect that’s unlikely to happen (at least in the U.S.) given the company’s commitment to free speech and the inevitable workarounds users would develop to skirt the censors.
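To see why those workarounds come so cheap, here is a minimal, purely hypothetical sketch of a naive keyword filter; the banned list and the obfuscated spellings are invented for illustration and have nothing to do with anything Twitter has actually built.

```python
import re

# Hypothetical banned list, invented for this example.
BANNED_WORDS = {"threatword1", "threatword2"}

def is_blocked(tweet: str) -> bool:
    """Return True if the tweet contains any banned word as a whole token."""
    tokens = re.findall(r"[a-z0-9]+", tweet.lower())
    return any(token in BANNED_WORDS for token in tokens)

# Trivial obfuscation slips straight past the filter.
print(is_blocked("threatword1 example"))    # True: exact match is caught
print(is_blocked("thr3atword1 example"))    # False: a single swapped character skirts it
print(is_blocked("threat.word1 example"))   # False: inserted punctuation splits the token
```

Even this toy version shows the cat-and-mouse problem: every new spelling has to be added by hand, which is part of why word blocking tends to be a losing game.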
Clearer, more straightforward methods of reporting abuse can only help clamp down on the invective, despite the likelihood the tool will be misused. Perhaps the bigger question is: In the digital era, what constitutes abuse?
In 2010, a man was convicted of sending a menacing communication after joking on Twitter about blowing up an airport. The tweet was Paul Chambers’ exasperated reaction to the airport closing and leaving him unable to visit his girlfriend, but it was clearly a joke (the U.K.’s High Court eventually agreed and overturned the conviction).
However, a tweeter’s intent is not always obvious. How do you distinguish a terrible joke from a genuine threat of a terrorist attack amid a flood of tweets? What’s the difference between an absurd Weird Twitter gag and a genuine intention to cause harm?
More pertinently, how do you tell the difference between a rape joke and a threat of sexual assault? It’s not always clear, and the ambiguity may leave users either overzealous or blasé about reporting abuse.
On the back of the Criado-Perez case, there are calls to clamp down on “aggressive online atheism” or strong opinions—speech that constitutes more of an aggressive debate than a threat. Taking personal offense when your beliefs are attacked, as the Telegraph‘s Tim Stanley did when making the aforementioned call to block aggressive atheism, is not the same as receiving a promise of physical or sexual harm. Recognizing the difference isn’t always easy when you’re invested in an energetic debate.
Misreporting abuse when only your feelings have been hurt increases the workload for Twitter and law enforcement, and pushes more important concerns further down the backlog. It will also damage Twitter’s reputation as a community where people can discuss whatever they damn well please without fear of repercussions (as long as they stick to the largely common-sense rules and the laws of their own countries, of course).
Former Supreme Court Justice Oliver Wendell Holmes, Jr. is often credited with the line, “The right to swing my fist ends where the other man’s nose begins.” On Twitter, words can be weapons, and the tools built to block them can serve as shields or be used to return fire.
Photo via carolinecp/YouTube