Facebook zaps 583 million fake accounts amid ’18 redemption tour

Facebook wants us to know how hard it’s working to keep its platform safe.

Christina Bonnington

Facebook is putting a lot of work into being transparent about how its social network operates, or at least more transparent than it used to be.

On Monday, Facebook revealed that it suspended 200 apps over potential data misuse, and in April, it published details about its internal guidelines for enforcing community standards. Now, the company has released its first quarterly moderation report, detailing the actions it has taken against accounts and content that violate those community standards.

According to its inaugural Community Standards Enforcement Report, Facebook took action on 837 million pieces of spam content and closed 583 million fake accounts, all in the first three months of 2018.

Facebook was able to take action on such a vast number of accounts partly because it uses machine learning to flag possible violations. Some of its algorithms have become quite good at spotting certain kinds of community standards violations: nearly 100 percent of the spam the network caught was identified via AI, as was 99.5 percent of fake accounts and terrorist propaganda. Its algorithms were also effective at identifying graphic violence and posts that included nudity. Hate speech, however, proved trickier: the automated systems flagged only 38 percent of hate speech violations during this period before users reported them.

Richard Allan, Facebook’s vice president of public policy for Europe, the Middle East, and Africa, told the Guardian that the company is trying to “be as open as we can” about its moderation efforts.

The call for greater transparency at Facebook has been gaining momentum for years, but only recently, since the 2016 election and the Cambridge Analytica scandal, has the company begun to take notable action. Facebook’s quarterly moderation report should shed greater light on the steps the company is taking to ensure the social network is populated with legitimate, non-offensive posts and users. It should also act as a barometer for how well its AI flagging systems are working.

H/T the Guardian

 