
Top U.S. official wants tech companies to be liable for hosting terrorist content

Will Twitter have to work even harder to ban terrorists?


Eric Geller


Should social networks be liable if terrorists use their platforms? According to one senior official at the Department of Justice, the answer might be yes.


“We have a new technology, terrorist groups are taking advantage of it, and we need to work together to make sure they don’t cost innocent lives,” John Carlin, Assistant Attorney General for National Security, said at the Aspen Security Forum in Colorado on Thursday. “And if we can’t do that under existing authorities, there’s going to be more and more discussion about what additional authorities do we need.”

Carlin’s comments, first reported by Politico, represent the government’s latest push to prevent extremist groups and lone-wolf jihadists from exploiting sites like Twitter and Facebook for propaganda purposes. The terrorist group ISIS, also known as the Islamic State, has made extensive use of social media to spread its message, and law-enforcement officials have arrested an increasing number of people in the U.S. for expressing support for ISIS or attempting to join the group after connecting to it online.



Tech companies are not currently liable under most circumstances for the material that their users post, thanks to Section 230 of the Communications Decency Act of 1996. The law states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” 

This freedom from “intermediary liability” was central to the early flourishing of Internet services, including the precursors to modern social networks. Tech companies would have been far less likely to innovate and grow their user base if they thought that letting criminals slip into the mix would expose them to prosecution.

“Nobody’s going to invest in websites and blogs—and, of course, eventually social media—if they’d be held liable for something posted on the site,” Sen. Ron Wyden (D-Ore.), who helped craft the legislation in the House, told the Daily Dot in March.

Importantly, however, Section 230 does not shield companies from federal criminal charges, meaning the government can already prosecute them for providing material support to terrorist groups. Assistant Attorney General Carlin did not say whether he considered involuntarily hosting terrorist propaganda to be providing material support, or whether his proposed new authority would expand the circumstances in which the government could bring such charges.


Marc Raimondi, a spokesman for the Justice Department’s National Security Division, said that Carlin “was merely explaining that, depending on the facts and evidence, providing technical assistance to a foreign terrorist organization can constitute ‘material support.’”

“The Department is not presently seeking changes to, or elimination of, Section 230 of the Communications Decency Act,” Raimondi said in an email to the Daily Dot.

Asked whether the department wanted new authority to prosecute tech companies for involuntarily hosting terrorist content, Raimondi said, “Not at this time, no.”

Wyden, one of the Senate’s top civil-liberties advocates, said that Carlin’s suggestion reflected the wrong priorities.


“The Executive Branch should know better than to go after neutral communications platforms in the name of national security,” Wyden told the Daily Dot in an emailed statement. “Justice Department officials would better serve the American people by targeting terrorists, not technology.”

He also suggested that targeting platforms instead of people was a misuse of government resources and a break from law-enforcement traditions.

“When the Ku Klux Klan spread terror throughout the South, activists sued those hatemongers into bankruptcy, but no one sued the telephone companies or Postal Service for allowing them to communicate,” he said. “The Executive Branch should focus on the real threats, not on threatening service providers.”

Facebook says that it deletes the extremist material it finds, but the company did not say whether it opposed Carlin’s idea for new authority.


“Our policies on this are crystal clear: We do not permit terrorist groups to use Facebook, and people are not allowed to promote or support these groups on Facebook,” Monika Bickert, Facebook’s head of policy management, said in an emailed statement. “We remove this terrorist content as soon as we become aware of it.”

A Google spokeswoman declined to comment on Carlin’s suggestion of new powers to prosecute tech companies. A Twitter spokesman did not respond to a request for comment. A Yahoo spokeswoman directed the Daily Dot to an outside civil-liberties advocate.

Civil-liberties advocates immediately raised concerns about the chilling effect that new prosecutorial authority would have on tech companies’ management of their networks.

“While I understand that Facebook and Twitter already do some content scanning and do cut off accounts under some circumstances, it’s an unworkably high burden on social-media companies to require them to be responsible for the content of every single user,” Rachel Levinson-Waldman, senior counsel for the Liberty and National Security Program at New York University Law School’s Brennan Center for Justice, said in an email to the Daily Dot.


Levinson-Waldman predicted that tech companies would likely respond to such authority by overcompensating, censoring material that did not rise to the level of a credible terrorist threat.

“If they are going to be sued based on the worst of the worst, they will cut off (or report) users who are far short of that—who are talking about terrorism, or who are communicating with someone suspected of terrorism, but are not themselves involved in anything nefarious,” she said.

Emma Llanso, director of the Center for Democracy & Technology’s Free Expression Project, said that “creating new legal liabilities for these companies and requiring them to police user content will almost certainly result in overbroad takedown of individuals’ speech.”

New authority to prosecute tech companies for hosting terrorist content would also open the door to stripping intermediary-liability protections for less dangerous but still objectionable material, a slippery slope that deeply concerns free-speech advocates.



“This may be about terrorism now,” Levinson-Waldman said, “but it’ll be about something else in the future: crimes that are erroneously classified as terrorism, crimes classified as terrorism that many people might disagree are terrorism, or simply other activities that end up being the targets of government attention.”

“It doesn’t take a lot of imagination,” she added, “to imagine how chilling this will be to First Amendment-protected activities.”

In June 2010, the Supreme Court ruled in Holder v. Humanitarian Law Project that the government could bar a human-rights group from offering legal advice to terrorist organizations in Turkey and Sri Lanka. It remains the only case in which the Supreme Court has upheld a content-based restriction on political speech under strict scrutiny, its most demanding standard of review.


David Cole, a Georgetown University Law Center professor who represented the Humanitarian Law Project before the Supreme Court, called the idea of prosecuting companies for hosting terrorist content “very disturbing.”

“I think it is fair to say that the government already has sweeping authority to punish even speech [that] advocates only lawful activities,” he told the Daily Dot in an email. “So I don’t see a case for needing further authority.”

The hypothetical authority described by Assistant Attorney General Carlin would carry with it a host of complications. As with many government surveillance practices, it could encourage companies to move overseas to avoid prosecution. If they remained in the U.S., they might need to constantly monitor their networks for extremist content, something that few of them are equipped to do.

Photo via Fernando Alfonso III
