Can this algorithm save comment sections from the trolls?

Maybe someday soon, we can read the comments.

AJ Dellinger

The Internet turns quickly from mostly civilized society to the Wild West as soon as you scroll past the page break and venture bravely into the comment section. But for trolls who get their kicks typing slurs and hate speech, the dystopian future of The Minority Report is about to become reality.

According to the results of an extensive study by researchers from Cornell University and Stanford University, it is possible to identify trolls with impressive accuracy before they do their worst damage.

“It’s not hard to notice that the comment section of major news sites is often plagued by antisocial behavior, like trolling or flaming,” Cristian Danescu-Niculescu-Mizil told the Daily Dot. The assistant professor at Cornell’s Department of Information Science is one of the authors of the paper “Antisocial Behavior in Online Discussion Communities.”

“Despite its prevalence,” Danescu-Niculescu-Mizil said, “surprisingly little is known about online antisocial behavior: Do users take time to become antisocial, or is deviant behavior ‘innate’? Do antisocial users ever redeem themselves and become ‘normal’ community members? Does the community’s reaction to their unwanted actions help them improve, or does it encourage more deviant behavior?”

With funding from Google and commenting data provided by popular comment hosting service Disqus, Danescu-Niculescu-Mizil and coauthors Justin Cheng and Jure Leskovec of Stanford spent 18 months analyzing the posts of users banned from CNN.com, Breitbart.com, and IGN.com in hopes of answering these questions.

Over the course of the study, the researchers looked at 10,000 users who would eventually be banned from their communities. Labeled “future banned users” (FBUs), these antisocial users were compared with “never banned users” (NBUs) to spot underlying trends that might explain their trollish tendencies.

“Antisocial behavior is simply an extreme deviation from the norms of the community.  Therefore, what is considered antisocial depends on the particular community and on the topic around which the respective community is formed,” Danescu-Niculescu-Mizil explained.

But he noted that the troublemakers on different sites display similarities in how they interact with their communities, regardless of the topic of discussion.

“The main insight that renders our approach generalizable across different sites is that instead of relying only on what the users write (which can be quite different from one site to another) it also exploits interaction patterns between users and their community (e.g., how often and where does the user post, how do other users respond).”

Though there are some differences from community to community, trolls for the most part employ similar tactics and behavior regardless of which site they frequent. Nearly all of the 10,000 FBUs examined wrote at a lower readability level than the average user and expressed less positive emotion (take from that what you will). FBUs were also less likely to use conciliatory or tentative language like “perhaps” or “consider,” making their comments read as more incendiary and inflammatory.

The predictability of these users makes identifying them relatively easy. Danescu-Niculescu-Mizil and his fellow researchers found they could ID a troll with 80 percent accuracy within just five to 10 posts. That preemptive troll-spotting performance remains consistent across domains, meaning the method can catch antisocial users in various communities regardless of their content.
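For readers curious what such a system might look like under the hood, here is a minimal sketch of the kind of early-warning classifier the study describes: per-user features computed over a user’s first few posts, paired with a banned/not-banned label. The feature set, the synthetic data, and the choice of logistic regression are illustrative assumptions, not the paper’s exact method.

```python
# Minimal sketch of an early-warning troll classifier (illustrative only).
# Features and model are assumptions; the paper's exact pipeline may differ.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users = 2000

# Hypothetical per-user features over a user's first 5-10 posts:
# [mean readability, mean positive sentiment,
#  fraction of posts deleted by moderators, mean replies received]
X = rng.normal(size=(n_users, 4))

# Synthetic labels: 1 = future banned user (FBU), 0 = never banned (NBU).
# Lower readability/sentiment and more deletions/replies raise the odds,
# mirroring the trends the study reports.
logits = -1.0 * X[:, 0] - 0.8 * X[:, 1] + 1.2 * X[:, 2] + 0.6 * X[:, 3]
y = (logits + rng.normal(scale=0.5, size=n_users) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)
print(f"Held-out accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2f}")
```

The accuracy printed here reflects only the synthetic data’s built-in signal; the point of the sketch is the shape of the pipeline, combining what users write with how the community responds to them, as the researchers describe.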

The study found that FBUs were able to generate significantly more replies than the average user. Though you aren’t supposed to feed the trolls, the temptation is often too much for some users to resist.

The study also revealed that moderation with a quick trigger finger for censoring often fosters heightened anger and more antisocial behavior from users later in their commenting lives. Even though most trolls find themselves banned after a particularly heated debate, squashing them too early may only make them worse.

While the analysis sheds new light on the behavior of users in online forums and comment sections, it unfortunately doesn’t necessarily apply to the same type of behavior on social media. “We don’t know if these results are directly applicable to Facebook or Twitter, since the nature of the relations between users (and their accountability) are quite different,” Danescu-Niculescu-Mizil said.

As for online communities, Danescu-Niculescu-Mizil hopes his research will result in better online interactions. “We hope that our findings will lead to systems that can assist community moderators and render them more effective, for example by signaling potential troublemakers early on. In particular, we don’t seek to replace the role human moderators have in curating their communities.”

In Dave Eggers’s The Circle, a novel about a Google-and-Facebook-on-steroids-style tech company, one enterprising character unveils a technology for law enforcement that displays an augmented view from street cameras, identifying people and highlighting suspected criminals and those with prior records in bright colors so they stand out in a crowd. In the story, it’s played as a clear overreach and invasion of privacy. In the world of toxic commenters, it may not be so out of place.

For moderators waiting for that system to come into existence, it’s best to remember that even trolls are people. As Danescu-Niculescu-Mizil points out, “Our findings suggest that some deviant users do actually redeem themselves, so an alternative to immediately outcasting them may be to give them a chance to align to community norms.”

Photo via libertygrace0/Flickr (CC BY 2.0)

 