In the wake of the mosque shootings in New Zealand where 49 people were killed, a number of people online are criticizing tech giants like Google, Twitter, and Facebook for allowing hateful content to exist on their platforms.
Four people are in custody following the shooting, in which the alleged killer reportedly used a helmet camera to film and livestream the attacks on Facebook. Footage was later available on YouTube, and video of the attack was still easily found on Twitter as of Friday morning.
As a number of social media sites scrambled to remove the content, people online criticized them for allowing hateful content to fester on their platforms. The backlash comes in part because an account, believed to be connected to the killer, posted a manifesto filled with white nationalist rhetoric ahead of the attack.
Twitter has especially come under fire for its refusal to take a strong stance on hate speech.
People have long called for CEO Jack Dorsey to #BantheNazis. Today, users reiterated that sentiment.
https://twitter.com/LucianaLamb/status/1106551519615336453
Ban the fucking Nazis, @jack. Comb through your social media platform, find them all, and ban them. All of them. Now. There’s no place for them in a civilization.
— John Phipps: Handsomeness Incarnate (@MagitekDad) March 15, 2019
I don’t give a fuck about your broken hearts, ban the fucking Nazis from your platform once and for all. https://t.co/V9ninyQqcn
— Oz Katerji (@OzKaterji) March 15, 2019
https://twitter.com/nailedsaviour/status/1106491021976719360
This is why people say ban the fucking Nazis. You give them a platform and they use it radicalize each other and livestream murder. Any platform that allows white supremacists to congregate is complicit
— Butt Praxis buttpraxis.bsky.social (@buttpraxis) March 15, 2019
And yet, we continually show how inept we are at finding an answer to hate-fueled gun violence. @twitter and @facebook are complicit as they refused to ban hate speech. They give platforms to Nazis and the alt-right to spew their vitriol. To recruit others to their ranks.
— Tom (@urban_tom) March 15, 2019
Others were critical of the social media platforms' response to the videos being shared so widely in the wake of the attack.
The video of the attacks in New Zealand is only the latest to spread on social media. As CNN noted, attacks in Thailand, Denmark, and the United States have all been broadcast on social media.
“While Google, YouTube, Facebook, and Twitter all say that they’re cooperating and acting in the best interest of citizens to remove this content, they’re actually not because they’re allowing these videos to reappear all the time,” Lucinda Creighton, a senior adviser at the Counter Extremism Project, told the news outlet.
Creighton added:
“The tech companies basically don’t see this as a priority, they wring their hands, they say this is terrible. But what they’re not doing is preventing this from reappearing.”
New Zealand Police said they are “working to have any footage removed” from sites online where it is being shared.
As CNN reports, Facebook, Twitter, and YouTube–which is owned by Google–struggled to rein in the spread of the live-streamed video the shooter took during the shootings. In a statement, Facebook said it took down the video—and the alleged shooter’s accounts on the site and Instagram—after it was alerted about it by police. The company also said it was removing “any praise or support for the crime and the shooter or shooters as soon as we’re aware.”
“We will continue working directly with New Zealand Police as their response and investigation continues,” the company said in a statement.
Meanwhile, YouTube said it was “working vigilantly to remove any violent footage.” Twitter told the Washington Post it was “proactively working to remove the video content from the service.”
The responses from the social media giants didn’t seem to be enough for many people, who questioned how the attack could be livestreamed and shared so quickly.
Apparently the gunman in New Zealand livestreamed the shooting, completely unfettered. What say you @facebook ? No algorithm to address actual terrorist acts?
— María Peña (@mariauxpen) March 15, 2019
https://twitter.com/samthielman/status/1106531719438581761
Extremism researchers and journalists (including me) warned the company in emails, on the phone, and to employees’ faces after the last terror attack that the next one would show signs of YouTube radicalization again, but the outcome would be worse. I was literally scoffed at. https://t.co/z0OPqfJJw6
— Ben Collins (@oneunderscore__) March 15, 2019
Was the video flagged by Facebook’s technology prior to police notifying Facebook of it? If not – WHY? https://t.co/QEj97jY06x
— Cody Melissa Godwin (@MsCodyGodwin) March 15, 2019
Critics have long said tech giants have struggled to police hate speech, hiding behind claims of free speech and their role as platforms rather than publishers, despite those platforms' capacity to radicalize people more swiftly than previous mediums.