
7 reasons Twitter won’t fix its revenge porn problem

If only eradicating revenge porn was as easy as telling people not to post it.


S.E. Smith


After a number of nation-states, and even Reddit, have taken action, Twitter has finally heard the call: It’s banning revenge porn. Dick Costolo, Twitter’s CEO, recently admitted in a memo that the company “sucks” at managing trolls—including those who post intimate photos without consent—and that the problem is driving away ordinary users. 2015, the microblogging site promises, will be the year of change, as it rolls out a growing number of policies designed to address harassment, doxing, and now, revenge porn.


But Twitter is never actually going to achieve its stated goal—the problem of revenge porn is too big for the site, and the technical challenges are too great. To protect users, it would need to compromise some of its core values, and as it stands now, the policies around revenge porn don’t really offer much protection.

Here are seven reasons Twitter will fail when it comes to enforcing this laudable addition to site policies:

1) Victims have to know about it to report it

If moderators regularly evaluated images—as, for example, Facebook does in its eternal quest to hide nipples from the world—they could catch suspected revenge porn cases. Instead, Twitter is asking users to actively report violations, which means victims need to locate such images themselves and trace them back to the offending accounts. Aside from the fact that scouring the Internet for intimate images of yourself is hardly a pleasant task, being exposed to the harassment and degrading comments that tend to accompany them is even more disheartening.


If a user takes down offending images or skips to another account, the victim may have little recourse—and no idea where those images actually ended up. At Gizmodo, Brian Barrett comments that “these security measures as currently implemented are entirely reactive; it puts the onus on the victim to make it right.”

2) Only the subject, or a legal representative, may report revenge porn

One of the most valuable abuse-reporting tools a platform can offer is the ability for any user to report abuse, spam, and violations of the terms of service. Under Twitter’s revenge porn policy, however, only the victim or a representative can file a report; a friend can’t report an image spotted in her feed, for example, and people can’t form communities to look out for each other. Establishing networks to support people under attack from doxers and harassers is a common tactic in feminist and activist communities, where going it alone can feel frightening and impossible.

Sometimes it’s possible to identify revenge porn without linking it easily to a specific victim. The ability to quickly report such images for review enables all users to take an active role in getting them taken down quickly. Without that, users have to painstakingly track down victims and alert them—and no one wants to open their email or go onto Twitter to find a note from a kindly stranger or friend alerting them to the fact that someone is posting offensive images of them. Without this measure, pictures can be up and circulating for hours, making them even harder to take down.


3) There’s an unreasonably high burden of proof

Twitter wants to avoid bogus takedowns, but it’s putting a very high burden of proof on users to address that issue. They need to be able to “verify” their identity, but Twitter defines this in nebulous terms, making the process unclear for users who want to get images taken down quickly. The longer the wait while Twitter staffers look over proof of identity, the longer people have to see images of themselves popping up all over the Internet.

Joseph Bernstein at BuzzFeed discussed his own experiences with “identification” on Twitter. In his case, he was trying to take down a racist account that was impersonating him. “More than 12 hours after people began to report the account as a malicious fake to Twitter,” he wrote, “and more than three hours after I uploaded a government-issued document to Twitter to verify my identity (I’m already verified in that other, less exciting way), there is still an anonymous dweeb using my identity to post racist bile.”

4) Multiple accounts are a snap to create on Twitter

One of the great advantages of Twitter is the ease of account creation; however, this is also one of the great disadvantages of the service. While there are numerous legitimate reasons to have multiple Twitter accounts (personal, professional, organizational, etc.), trolls take advantage of the fact that they can quickly and easily create throwaway accounts. If one is shut down, 15 can appear to replace it in the blink of an eye, distributing the same image over and over again.


This isn’t an easy issue to address. One reason Twitter can be such a powerful and useful organizing tool is that it provides the security of anonymity, protecting users from hostile governments and harassers. The company might be loath to give this up, and many users (including, possibly, some of those who experience harassment but support Internet freedom) might defend this vital function as well. But it’s extremely difficult to tackle harassment without effectively requiring users to disclose their identities when they register.

5) The boundary of “consent” is too murky

In addition to demanding that users prove their identity, Twitter also demands that they demonstrate such images are truly being circulated without consent. Users face the vexing reality that there’s no clear way to document consent—short of requiring photo releases for every image on Twitter—let alone to prove its absence.

The site claims that if images have been made publicly available with obvious indicators of consent elsewhere, they don’t count as revenge porn. But what happens if, for example, a feminist blogger poses nude for an art project, giving consent to distribution of the image in that context, and then finds it being circulated on Twitter with cruel comments about her body? Or discovers that the image is being tweeted at professional contacts or family members? Twitter might handle this through its separate copyright violation policies—though if the image is licensed under Creative Commons or otherwise offered for public use, there are no legal grounds for the site to act on—but there are too many gray areas when it comes to “consent” and nude images on the Internet.


6) Moderators will face an overwhelming workload

Twitter says it will have a team of moderators, its size undisclosed, working around the clock to address reports of revenge porn and act on them as quickly as possible. The workload is likely to be massive, especially given the site’s dual identity and consent requirements, which will force moderators to work painstakingly through abuse reports to determine when an account should be locked and whether a user should be banned altogether. As Twitter—and harassment—grows, so does the moderation load. Moreover, someone who posts revenge porn is likely to do so again, and the moderation policy doesn’t appear to include long-term follow-up to ensure that violators keep behaving appropriately.

This poses a huge enforcement issue, as the site may not be able to keep up. In 2014, Women, Action, and the Media (WAM) even had a brief stint at the Twitter moderating helm: two women from the feminist organization reviewed complaints, forwarded them to official moderators, and tracked the results. That the site was outsourcing its responsibilities to volunteers was a stark and depressing measure of its commitment to moderation.

7) Twitter hasn’t commented on whether it will use technological measures

Photos have fingerprints, and so do harassers: once an image has been identified, it’s easy to recognize it when it pops up again. Twitter could implement an algorithm to automatically remove a picture once moderators have identified it as revenge porn, but the company hasn’t said whether it will use such technology. Twitter could also use IP addresses to identify chronic violators and ban them—barring them from creating new accounts and even blacklisting their access altogether. That would be a crude fix, as such users could hop to another location or spoof their IPs, but it would be a start.
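What would such fingerprinting look like? Below is a minimal sketch of “average hashing,” one common perceptual-hashing technique that matches near-duplicate images even after resizing or recompression. To be clear, Twitter has not described any system it might use; this example assumes Python with the third-party Pillow imaging library, and the file names are hypothetical placeholders.

```python
# Average hash (aHash): shrink an image to a tiny grayscale grid and
# record, bit by bit, whether each pixel is brighter than the mean.
# Near-duplicate images yield hashes that differ in only a few bits.
from PIL import Image  # third-party Pillow imaging library


def average_hash(path: str, hash_size: int = 8) -> int:
    """Fingerprint an image as a (hash_size * hash_size)-bit integer."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | (1 if pixel > mean else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Count the bits on which two fingerprints disagree."""
    return bin(a ^ b).count("1")


# Once moderators flag an image, store its fingerprint; a new upload
# whose hash lands within a small distance of it is probably the same
# picture reposted. (Both file names here are hypothetical.)
flagged = average_hash("flagged_image.jpg")
upload = average_hash("new_upload.jpg")
if hamming_distance(flagged, upload) <= 5:
    print("Likely repost of a flagged image; queue for review.")
```

Production systems rely on more robust fingerprints (Microsoft’s PhotoDNA, for instance, which several platforms already use to match known abusive imagery), but the matching principle is the same: store a hash of the flagged image and compare each new upload against it.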


To show that it really takes harassment seriously, Twitter needs to demonstrate that it’s using, will be using, or plans to develop the obvious technology needed to enforce the revenge porn ban. Moderators alone won’t fix the problem; wherever possible, their work should be augmented by automated systems that operate faster and at far greater scale than humans can.

Twitter’s desire to fix spam, harassment, and other problems seems genuine, but it’s not going to succeed with this approach, and it needs to demonstrate that it has more in the bag to convince users to stay.

Photo via rycheme/Flickr (CC BY-ND 2.0)

 