
Instagram included in Facebook transparency report for the first time

The company reveals how prevalent child exploitation, drug and firearm sales, self-harm, and terrorist content are on Instagram.


Mikael Thalen

Lead image: Piqsels (Public Domain)

For the first time on Wednesday, Instagram was included in Facebook’s quarterly transparency report.


In a blog post from the company, Guy Rosen, Facebook’s vice president of integrity, outlines how the tech giant enforces its rules in four specific policy areas.

“In this first report for Instagram, we are providing data on four policy areas: child nudity and child sexual exploitation; regulated goods — specifically, illicit firearm and drug sales; suicide and self-injury; and terrorist propaganda,” the company states.

In terms of self-harm, Rosen says Instagram was successful in removing roughly 835,000 pieces of content in the second quarter, 77.8 percent of which was detected by the company proactively. In the third quarter, Instagram removed approximately 845,000 pieces of self-harm content, 79.1 percent of which was detected proactively.


As for the company’s policy on terrorism, Rosen states that Facebook proactively detected 98.5 percent of terrorist-related content on Facebook and just 92.2 percent on Instagram. Facebook says it “will continue to invest in automated techniques to combat terrorist content” in hopes of pushing those percentages even higher.

Facebook also added that it has made progress tackling child exploitation on Instagram, with 512,000 instances removed in the second quarter and an additional 745,000 pieces of content in the third quarter.

Instagram likewise took down 1.5 million pieces of content related to drug sales, along with around 58,600 posts related to firearm sales.

Neither Instagram nor Facebook released statistics on fake accounts or hate speech, however, despite increased attention on those issues as the 2020 election draws near.


The company did note, though, that it is employing new tactics to crack down on hate speech before it can spread.

“Our detection techniques include text and image matching, which means we’re identifying images and identical strings of text that have already been removed as hate speech, and machine-learning classifiers that look at things like language, as well as the reactions and comments to a post, to assess how closely it matches common phrases, patterns and attacks that we’ve seen previously in content that violates our policies against hate,” the blog post notes.
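Facebook’s post doesn’t share implementation details, but the two techniques it describes—matching new posts against text already removed as hate speech and scoring everything else with a machine-learning classifier that also weighs reactions and comments—can be sketched very roughly as follows. This is a purely illustrative Python sketch; the data, thresholds, and function names are hypothetical and are not Facebook’s code.

    # Illustrative sketch only -- not Facebook's actual system.
    # (1) exact matching against text previously removed as hate speech
    # (2) a stand-in classifier that weighs the post plus its engagement signals
    import hashlib

    # Hashes of strings previously removed as hate speech (hypothetical examples).
    KNOWN_REMOVED_TEXT = {
        hashlib.sha256(s.encode("utf-8")).hexdigest()
        for s in ["example removed string 1", "example removed string 2"]
    }

    def score_with_classifier(post_text: str, reactions: int, comments: int) -> float:
        """Placeholder for a trained classifier; a real system would use a
        learned model over language and engagement, not this heuristic."""
        signal = min(1.0, (reactions + comments) / 1000)
        return 0.5 * signal  # returns a score between 0 and 1

    def review_post(post_text: str, reactions: int = 0, comments: int = 0) -> str:
        digest = hashlib.sha256(post_text.strip().lower().encode("utf-8")).hexdigest()
        if digest in KNOWN_REMOVED_TEXT:
            return "remove"  # identical to text already removed as hate speech
        if score_with_classifier(post_text, reactions, comments) > 0.8:
            return "send to human review"
        return "allow"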


H/T The Verge