The controversial algorithm that could scrub the internet of ISIS propaganda

Will a wary Silicon Valley warm to it?

BY SHANE DIXON KAVANAUGH

A new technology—touted as a way to scrub terrorist propaganda from the internet—could be on a collision course with the very social media companies it seeks to serve.

Hany Farid, a computer scientist at Dartmouth College, has developed a software algorithm that he claims will allow platforms like Facebook and Twitter to identify content created by the Islamic State, similar extremist groups, and their supporters—and automatically eliminate it from their sites. The software, unveiled last week, is modeled on a successful child-porn detection technology that Farid created several years ago, and which some social media companies already use.

Farid told Vocativ that his new algorithm has been tested “extensively” and is in its final stages. He has teamed up with the Counter Extremism Project (CEP), a non-profit think tank, to create a database of extremist content online that Farid’s technology can then identify and remove from social media platforms.

The new initiative has already received praise from the White House, which continues to struggle over how to curb the spread of jihadist propaganda online. The issue resurfaced last week when President Obama suggested that extremist information disseminated on the internet may have inspired the gunman who shot and killed 49 people at a gay nightclub in Orlando. U.S. officials say that the shooter, Omar Mateen, pledged allegiance to ISIS leader Abu Bakr al-Baghdadi in a Facebook post on the day of the massacre.

But Silicon Valley, wary of the ongoing pressure to aggressively police its platforms, appears reluctant to embrace what Farid and the CEP are pitching. “As soon as governments heard there was a central database of terrorism images, they would all come knocking,” one tech industry official told the Washington Post anonymously.

Social media executives and Silicon Valley insiders interviewed by Foreign Policy and Defense One—also anonymously—voiced similar misgivings about the prospect of companies using the technology to hunt down purported extremism. Some worry that it would be hard, if not impossible, to come up with a neutral definition of terror-inspired content. Others fear a government could misuse the database to target political groups and content it opposes under the guise of fighting terrorism. Google, which owns YouTube, and Twitter have been explicitly skeptical, the Washington Post reported, citing industry officials.

Still, the arrival of a technology that promises wholesale removal of propaganda churned out by ISIS and other terror groups could place new demands on these companies. Acknowledging that they have had limited success in combating Islamist extremism online, the White House and U.S. officials have increasingly pressured tech companies to take on virtual jihadists directly.

Those efforts have, at times, been effective. Twitter announced in February that it had suspended roughly 125,000 accounts linked to ISIS and its supporters, a move that experts believe substantially undercut the terror group’s ability to recruit and radicalize online. Earlier this year, Facebook stepped up its attempts to target users and material it views as supporting terrorism.

But tech companies have also shown that there are limits to what they’re willing to do when terror activity spills into the virtual world. After the ISIS-inspired rampage in San Bernardino, California last year, Apple refused to reprogram shooter Syed Farook’s iPhone so the FBI could bypass its security. That prompted a federal lawsuit against Apple and forced the FBI to hire hackers to crack Farook’s mobile device. Social media companies, meanwhile, have grown accustomed to opposing counter-terrorism legislation in Congress that would force sites like Facebook and Twitter to spy on their users.

To be clear, the new technology developed by Farid and the Counter Extremism Project was not sponsored or funded by any government, nor is its use by companies mandatory. However, governments, including the U.S., would likely play a key role in determining what online content qualifies as extremist, said Mark Wallace, CEP’s chief executive, during a conference call last week with reporters.

The algorithm works by creating a distinct digital signature—what’s called a “hash”—for terror-linked images, videos, or audio tracks. Those signatures would then be stored in a database overseen by CEP, Wallace said. The algorithm itself does not determine what content is extremist—and it doesn’t hunt for new terrorist imagery. Instead, companies, government agencies, and non-governmental organizations would work with CEP to build the database of terror-inspired media. The technology essentially acts as a go-between—finding items from the database on social media sites and expunging them.
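
In rough terms, the matching flow looks like this. The sketch below is a minimal Python illustration of that go-between role, not Farid’s actual system: the clearinghouse database and the screening functions are hypothetical, and SHA-256 stands in for the real signature, so it would catch exact copies only.

```python
import hashlib

# Hypothetical shared database of signatures for media already vetted
# as extremist, distributed to platforms by a clearinghouse such as CEP.
known_hashes: set[str] = set()

def fingerprint(media: bytes) -> str:
    """Compute a distinct digital signature (a 'hash') for a file.

    SHA-256 is used here only to illustrate the lookup flow; it matches
    byte-identical copies only. The production system would use a robust
    perceptual hash that survives cropping and re-encoding.
    """
    return hashlib.sha256(media).hexdigest()

def register_extremist_media(media: bytes) -> None:
    """Clearinghouse side: vetted content is fingerprinted and shared."""
    known_hashes.add(fingerprint(media))

def screen_upload(media: bytes) -> bool:
    """Platform side: return True (block) if the upload matches the database."""
    return fingerprint(media) in known_hashes
```

The design keeps the judgment call (what counts as extremist) with the humans who curate the database, while the matching step itself runs automatically at upload time.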

Farid’s new algorithm relies on much of the same technology used for PhotoDNA, the child-porn detection system he created in 2008. PhotoDNA creates a unique hash for each of the millions of pornographic images of children collected and maintained by the National Center for Missing and Exploited Children. The hash can then identify these photos online, even if they’ve been cropped or manipulated. For his latest project, Farid has largely updated PhotoDNA to include video and audio files as well as images.
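
PhotoDNA’s defining property is that its signatures survive that kind of manipulation, unlike the exact-match fingerprint sketched above. A textbook “average hash,” shown below in Python, gives a feel for how perceptual hashing tolerates small edits; it is a stand-in for illustration, not Farid’s proprietary algorithm, and it assumes the third-party Pillow imaging library is installed.

```python
from PIL import Image  # third-party: pip install Pillow

def average_hash(path: str, size: int = 8) -> int:
    """Shrink the image to a tiny grayscale grid, then record one bit per
    pixel: 1 if it is brighter than the grid's mean, else 0. Small crops
    and re-encodings barely change these coarse brightness relationships,
    so the resulting hash barely changes either."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Count the bits on which two hashes disagree."""
    return bin(a ^ b).count("1")

def is_same_content(path_a: str, path_b: str, threshold: int = 5) -> bool:
    """Treat two images as copies if their hashes differ in only a few bits;
    the threshold trades false positives against missed manipulations."""
    return hamming(average_hash(path_a), average_hash(path_b)) <= threshold
```

In practice, a platform would precompute hashes for everything in the shared database and compare each new upload against that index, flagging anything within the bit-distance threshold.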

Social media companies have already signaled their concern over whether a consensus could be reached on what constitutes a terrorist image or video—and the implications such decisions might have for free speech. The algorithm’s creators, however, remain confident in the product. “If you could search out the beheading videos, or the picture of the ISIS fighter all in black carrying the Daesh flag across the sands of Syria, if you could do it with video and audio, you’d be doing something big,” Wallace said. “I believe it’s a game-changer.”

But even some counter-terrorism experts aren’t sold. “This new technology is far from the silver bullet that will tackle extremism online,” Nicholas Glavin, a senior research associate at the U.S. Naval War College, told Vocativ. “Focusing on the supply side of extremist content fails to address the push and pull factors that drive individuals to it in the first place.”

Regardless, the White House is already promoting its potential. “We welcome the launch of initiatives such as the Counter Extremism Project’s…that enable companies to address terrorist activity on their platforms and better respond to the threat posed by terrorists’ activities online,” said Lisa Monaco, Obama’s assistant for homeland security and counterterrorism. “The innovative private sector that created so many technologies our society enjoys today can also help create tools to limit terrorists from abusing these technologies in ways their creators never intended.”
