
A dislike button could be just what the Internet needs—as long as it doesn’t work

I believe we need a “dislike” option more than the “like” button.


Nick Douglas


Back when I blogged as Valleywag, the prime ankle-biter of the startup world, Digg founder Kevin Rose told me that he was considering a secret feature. Users were over-reporting content they didn’t like, misusing the “report as spam” button. So Kevin wanted to add a new option to the dropdown menu of reasons for reporting a post: “Report as lame.” While “report as spam” or “report as wrong category,” when hit by enough users, would automatically move or delete the post, “report as lame” wouldn’t do anything. Its sole purpose was to satisfy the user. 


And I believe we need it more than the “like” button.

Call it a “dislike” button—but not a working one. Facebook repeatedly deletes any third-party app that tries to add one, and Twitter and Tumblr never supported one, for good reason: we’re not lacking for ways to criticize things on the internet. But the “like” button fulfills only half of our basic desire not just to comment on everything online but to be put in charge of it.

As writer Paul Ford put it, the “fundamental question” that the web seeks to answer is “Why wasn’t I consulted?” And users will exploit any tool they’re given to try to shape the internet to their tastes. Usually this is appropriate: Facebook and Twitter work because you choose which feeds to follow. Reddit’s voting system makes it a more thorough resource for a wider readership than any single blog, magazine, or newspaper. Wikipedia has silenced all but the most pig-headed critics of crowdsourcing.


But the users’ reach will always exceed their grasp, and they use the provided tools to push their own agendas. Urban Dictionary users define “Jason” as “the act of being the sexiest person alive” and “Meghan” as “a skanky slutty ho born to backwoods retard parents who cannot spell correctly.” Yelp, iTunes, and Amazon reviewers grade their purchases by often insane personal rubrics. Twitter and Tumblr users misuse hashtags to invade each other’s conversations with debate, vandalism, or spam.

And the sites that do have a “dislike” button are half run by it. On Reddit, where downvotes are meant to keep irrelevant and unhelpful posts in check, spammers regularly downvote all competing posts, and “downvote brigades” sometimes sweep in to punish a post or comment for subjectively offensive material, often against the wishes of the moderators and posters in the affected thread.

I run a YouTube channel called Slacktory, which sometimes gets hundreds of comments on a single video. YouTube has “report” buttons, too, with varying effects. Reporting a comment as offensive might flag it, but reporting it as spam will more effectively hide it from all users. Whenever I check what YouTube users have flagged as spam, many comments aren’t spam at all—sometimes they’re offensive, sometimes they’re simply an unpopular opinion. YouTube users have the option to report offensive content as such, but they choose spam because they know the system will prioritize that.

Twitter has also suffered multiple such campaigns (which I can’t link here, lest I perpetuate the cycle of reprisals), where frustrated users bothered by hateful speech on Twitter skipped the “report as abusive” option for the more effective, automated “report as spam” option, and encouraged each other to do the same.


Even when these campaigns have good intentions, they’re breaking a system that has specific purposes, and they normalize a misuse of these buttons that will inevitably spread to squelching unpopular speech. Any time upstanding social media users abuse a content-deletion feature, they hand two kinds of ammo to their opponents: first, the chance to complain to the platform’s owners and paint the original victims as aggressors, and second, the idea to simply fight fire with fire, which will always be more effective for the side with fewer scruples.

For example, the /r/ShitRedditSays community did such a good job publicizing and criticizing hateful Reddit comments (and arguably working as a downvote brigade) that they drove thousands of subscribers to opposing subreddits like /r/mensrights, who have been accused of running their own retaliatory brigades. The war has calcified Reddit as a haven for regressive sociopaths with the occasional ignored progressive outpost, and it’s discouraged progressive users from joining the site and overwhelming the vocal minority.

While this feature abuse is damaging to the system as a whole, it’s a rational choice for an angered user. We can’t eliminate users’ impulse to eradicate any speech they don’t like. And YouTube and Twitter show that providing several forms of “report” or “dislike” isn’t enough.

One partial solution, of course, is to make every reporting process as convenient as the ones that matter to the site’s bottom line. Right now that isn’t the case: when a user tries to report a tweet as “abusive,” the system dialog warns them, “In order to file a report you must still choose and complete a form. Select this option to continue.”


But we can’t expect every platform to solve this problem; these companies only have so many resources, and they will always be incentivized to shut down certain forms of undesirable content faster than others. And any such handling won’t stop users from trying to eliminate content that is actually desirable.

Until a platform can properly handle all forms of complaints, it needs a stopgap: the broken dislike button, a feature that draws the most specious complaints away to where they can’t harm good content.

Of course, a decoy button stops working when the user finds out it’s a decoy. So I’m not recommending a simple placebo crosswalk button. We need a sophisticated solution that convinces users they are making a difference, never reveals that it’s a trap, and, crucially, doesn’t throw out useful flagging along the way. That solution will need to evolve with user behavior—this is not a one-time cure; it’s a fight as unending as cryptographer vs. hacker.
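To make the idea concrete, here’s a minimal, purely hypothetical sketch of how a decoy report option might sit beside real moderation. The reason names, the threshold, and the hide_post stub are invented for illustration; this is not any platform’s actual code.

```python
from collections import defaultdict

# Hypothetical report reasons: some trigger real enforcement, some are decoys.
ACTIONABLE_REASONS = {"spam", "wrong_category"}
DECOY_REASONS = {"lame", "annoying"}
REMOVAL_THRESHOLD = 5  # invented number of reports before automatic action

report_counts = defaultdict(int)  # (post_id, reason) -> number of reports


def hide_post(post_id):
    """Stand-in for the platform's real takedown machinery."""
    print(f"post {post_id} hidden pending review")


def report_post(post_id, reason):
    """Record a report and return the message shown to the reporter."""
    report_counts[(post_id, reason)] += 1

    if reason in ACTIONABLE_REASONS:
        # Real path: enough reports actually hide the post.
        if report_counts[(post_id, reason)] >= REMOVAL_THRESHOLD:
            hide_post(post_id)
    elif reason in DECOY_REASONS:
        # Decoy path: the report is logged (and could quietly tune the
        # reporter's own feed) but never removes anyone else's content.
        pass

    # Both paths return the same reassuring message, so the decoy is
    # indistinguishable from the real thing.
    return "Thanks. We'll take a look at this post."
```

The crucial property is that shared response: the decoy only keeps working as long as the reporter can’t tell which branch they hit.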

That button might already exist. Months ago, I saw a surprising drop-down menu option on Facebook: “This post is unfunny/trying too hard.” Facebook has since replaced this with “It’s annoying or not interesting.” While this option likely affects what Facebook shows a specific user, I strongly suspect it’s also for the user’s peace of mind.


Reddit, which has fought spam invasion for years, famously fudges its upvote/downvote counts a bit to confuse spambots. Reddit admins can also “shadowban” a user—that user thinks they’re still posting and voting, but their activity is invisible to all other users. There could be other unpublicized features lurking on these sites. Hopefully this piece will inspire more.
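The shadowban mechanic is easy to sketch in the same hypothetical style—invented names, not Reddit’s actual implementation: the banned user’s activity is stored and echoed back to them, but filtered out of everyone else’s view.

```python
shadowbanned_users = {"spambot_42"}  # hypothetical banned account

comments = []  # list of (author, text) pairs


def post_comment(author, text):
    # The comment is always accepted, so the author notices nothing unusual.
    comments.append((author, text))


def visible_comments(viewer):
    # Normal comments are visible to everyone; a shadowbanned author's
    # comments are visible only to that author.
    return [
        (author, text)
        for author, text in comments
        if author not in shadowbanned_users or author == viewer
    ]


post_comment("spambot_42", "Buy my pills")
post_comment("alice", "Nice video")
print(visible_comments("alice"))       # only Alice's comment appears
print(visible_comments("spambot_42"))  # both appear, so the ban is invisible
```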

But aggravatingly, by definition, if any site makes this fantastic fake button, we can never find out. We can have nice things—but only if we don’t know it.

Photo via zeevveez/Flickr (CC BY 2.0)
