After former intelligence contractor Edward Snowden revealed that the National Security Agency was engaged in a massive online spying operation, the NSA insisted that its surveillance was within the law. But, as Freedom to Tinker points out, given the algorithmic nature of the agency’s work, even the NSA itself can’t be sure.
As Snowden’s leaks have revealed, the agency taps directly into transcontinental Internet cables, siphons data from telecom companies, attacks encryption schemes used for both personal and banking transactions, and has infected tens of thousands of global networks with malware. How, one must wonder, could it possibly ensure that the constitutional rights of U.S. citizens aren’t being violated?
Ostensibly, the Foreign Intelligence Surveillance Court, created in 1978, was designed to do just that. The secret court reviews the agency’s spying practices; if they seem in line with the law, it gives the NSA the go-ahead. But how, exactly, is the court reviewing the algorithms responsible for these dragnet online surveillance projects?
Put another way, there’s no evidence the secretive FISC is actually reviewing the code behind these digital surveillance programs. Nor is it likely that many of the federal judges who rotate through the court’s bench would be qualified to do so. Without some sort of technical expert advising the court, it’s difficult to see how it could make meaningful decisions about limiting the scope of collection software.
So how important is a technically competent voice in the FISC? As a recently leaked U.S. signals intelligence strategy report shows, the intelligence community appears to be relying increasingly on algorithms for both collection and constitutional compliance.
In a section titled “Goals for 2012-2016,” the report listed two telling objectives. The first is to “build compliance into systems and tools to ensure the workforce operates within the law and without worry.” In other words, it will be up to the software creators to make sure intelligence agents can’t overstep their bounds with a particular surveillance program.
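The report doesn’t describe what such compliance code actually looks like; the programs are classified. But a minimal sketch of the idea, with every name, rule, and selector below invented purely for illustration, might gate each query on an authorization check and log the decision either way:

```python
# Hypothetical sketch of "compliance built into the tool": the query
# function itself checks a selector's legal authorization before
# running, so an analyst can't overstep bounds even by accident.

# Illustrative stand-in for a table of legal authorizations; the real
# systems and their rules are secret.
AUTHORIZED_SELECTORS = {
    "selector-123": "FISA-702",
}

AUDIT_LOG = []  # every decision is recorded for after-the-fact review

def run_query(selector: str):
    """Run a collection query only if the selector is authorized."""
    authority = AUTHORIZED_SELECTORS.get(selector)
    if authority is None:
        AUDIT_LOG.append(("denied", selector))
        raise PermissionError(f"no authorization on file for {selector}")
    AUDIT_LOG.append(("allowed", selector, authority))
    print(f"running query for {selector} under {authority}")
    # ... actual collection would happen here ...

run_query("selector-123")    # allowed, and logged
# run_query("selector-456")  # would raise PermissionError
```

The trouble is that a gate like this is itself just software: if the check has a bug, the “compliance” it enforces has one too.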
This shift toward self-policing-by-algorithm shows up again in another of the intelligence community’s goals: to “build into systems and tools, features that enable and automate end-to-end value-based assessment of SIGINT products and services.”
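What an automated “value-based assessment” actually computes is, again, unknown. As a toy illustration only, with the fields and threshold below being pure assumptions, one could imagine each SIGINT product being scored on its value against its compliance risk, and flagged for human review when the score falls short:

```python
# Toy illustration of "automated value-based assessment": each SIGINT
# product gets a score, and low-value or high-risk items are flagged
# for human review rather than silently retained.

def assess_product(value: float, compliance_risk: float,
                   threshold: float = 0.5) -> str:
    """Return a disposition based on a crude value/risk tradeoff.

    value: estimated intelligence value in [0, 1]
    compliance_risk: estimated legal risk in [0, 1]
    """
    score = value * (1.0 - compliance_risk)
    if score >= threshold:
        return "retain"
    return "flag for review"

# Even a high-value product gets flagged if it is legally risky.
print(assess_product(value=0.9, compliance_risk=0.7))  # flag for review
```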
In one sense, it seems a logical solution. The amount of data being generated (and thus collected by intelligence agencies) is growing rapidly: “Digital information created since 2006 grew tenfold, reaching 1.8 exabytes in 2011, a trend projected to continue,” the report noted. And if you are conducting a surveillance operation whose massive scope is only possible with the tools of the digital world, then you certainly aren’t going to be able to police it without those same digital tools.
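Taken at face value, the report’s figure implies growth of roughly 58 percent a year, per this quick back-of-the-envelope calculation:

```python
# Sanity check on the report's figure: tenfold growth over the five
# years from 2006 to 2011 implies this compound annual growth rate.
years = 5
annual_factor = 10 ** (1 / years)  # ~1.585x per year
print(f"~{annual_factor - 1:.0%} growth per year")  # ~58% growth per year
```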
But in that case, if the agency is going to rely on algorithms for its own oversight, the FISC should be reviewing them line by line (since no one else is allowed to see them).
If nothing else, by not having this check in place, the intelligence community seems to be asserting that there are no bugs in its programs, which, considering the rocky launch of Healthcare.gov, seems unlikely.
Photo by William Warby/Flickr