This week, a veritable who’s who of prospective Bond villains came together to warn of humanity’s next great threat. Stephen Hawking, Elon Musk, Steve Wozniak, Noam Chomsky, and thousands of other academics, researchers, and tech executives signed an open letter warning against what they see as a potential robotics arms race akin to the spread of nuclear weapons in the last century.
Defining autonomous weapons (AW) as any military equipment which can “select and engage targets without human intervention,” the collective warns that “autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group.”
This versatility, claims the letter, could make AW “the Kalashnikovs of tomorrow.” The signatories fear that any one country pursuing artificially intelligent weaponry could spark a race between nations, resulting in the illicit spread of highly advanced weaponry through militaries, dictatorships, and terrorist groups. This is why the letter recommends “a ban on offensive autonomous weapons beyond meaningful human control.”
Hawking, Musk, and others want to build a wall between our ever-advancing robotics technology and the individual decision to engage a human enemy, but as technology and robotics inevitably move forward, that line may become so thin that we cross it before even our greatest minds can do much about it.
For one thing, the process of automating our military has already begun. Unmanned Aerial Vehicles (UAVs) have been part of the U.S. military since the Vietnam War. While even today’s so-called “drones” are mostly human-operated, they remove soldiers from the battlefields—if not from the decision to kill.
For that, we need to visit the border between North and South Korea, where the latter has deployed armed SGR-A1 sentry guns that can select targets attempting to cross the Demilitarized Zone. While the gun does not fire until a command post gives the affirmative, the SGR-A1 uses a complex array of sensors and devices to remove much of the brain work from military officials.
Aside from smaller automated tools of warfare, artificial intelligence plays a large role in the strategic defenses of many countries. Israel’s famed “Iron Dome” can automatically identify and intercept incoming rockets and shells threatening the airspace of the Jewish State’s major cities.
In a far more nefarious example, Wired reported in 2009 that Russia was still operating a Soviet-era system known as “Dead Hand,” which would automatically retaliate in the event of a nuclear strike against the largest nation on Earth: an immediate response, free of human interaction.
Such large-scale systems are not the kind being targeted by the signatories of this week’s letter, but they do illustrate how hard it would be to resist the pressure to put artificial intelligence to military use. AI and robotics are a rapidly growing field for the Pentagon and associated research labs like the Defense Advanced Research Projects Agency, or DARPA.
Preventing these scientists from crossing the line into fully automated, lethal weapons systems seems impossible precisely because such work appears so inevitable, and is likely already happening.
The history of military technology, like most technology, is the slow removal of humans from the equation. The bow and arrow put literal distance between humans and the act of violence; the UAV puts even more. Weapons like South Korea’s sentry gun turn potential threats into thermal images that can be eliminated with the press of a single button; a soldier isn’t even required to aim and shoot.
In many ways, it mirrors the history of the automobile. The first cars to roll onto American streets required complex processes just to start up and operate (here’s a five-minute tutorial on merely turning on a 1924 Model T). Then came a series of smaller innovations that made operating a car easier: fuel injectors, automatic transmissions, cruise control, and parking assistants. Soon, if a host of tech and auto companies deliver on their promises, you’ll only need to drive your car in bad weather.
One could say the human driver is still in control and still plays a vital role in the process, but at what point do we decide that too much of the responsibility has been handed over to the machine?
The advancement of weapons systems has followed a no less linear path toward full automation. Because the distinction between machine and human action grows ever thinner, until the two one day become indistinguishable, preventing that merger is all but a fool’s errand.
In the open letter, the signees argue that their efforts are similar to those that culminated in the 1967 Outer Space Treaty, which banned the placement of nuclear weapons and other weapons of mass destruction in orbit or on celestial bodies. When Ronald Reagan announced the Strategic Defense Initiative—better known as “Star Wars”—it was attacked by scientists and international law experts for violating the spirit of the Treaty.
This works fine when the distinction you need to make is between “outer space” and “Earth.” The line between man and machine—or man and guided machine, or man and semi-automatic machine, or man and automated machine working under the supervision of man—is not nearly so clear.
In the same way that experts agree artificial intelligence and automated systems will take over our workplaces, our homes, and our economy, it would seem an impossible task to prevent them from becoming the future of war.
Gillian Branstetter is a social commentator with a focus on the intersection of technology, security, and politics. Her work has appeared in the Washington Post, Business Insider, Salon, the Week, and xoJane. She attended Pennsylvania State University. Follow her on Twitter @GillBranstetter.
Photo via SurfaceWarriors/Flickr (CC BY SA 2.0)