
Who do you blame when a driverless car crashes?

Driverless cars can’t escape hacking and insurance challenges.


Jonathan Keane


Driverless cars are quickly working their way toward reality. Several road tests are penciled in for the coming year, and excitement is growing, along with ethical concerns.


As we continue to happily hand over basic functions to technology, there comes a moment to pause and ask: Are we relinquishing too much of our own decision-making autonomy?

That’s the question plaguing autonomous vehicles. Ever since Google threw its hat into the driverless car revolution (and became synonymous with it), advocates and opponents have jostled over the benefits and dangers, particularly when it comes to car crashes.

One of the first things that a driving instructor will tell you when you get behind the wheel is that crashes and collisions are often unavoidable. You’ll never be 100 percent safe on the road. Accidents happen; people lose control of the wheel or pedestrians dash out in front of traffic. All you can do is react.  


Proponents of autonomous vehicles believe that driverless cars will drastically change our roads and make them safer. They have a point; after all, 90 percent of road accidents can be blamed on human error.

But driverless cars will need to react with the best interests of the passengers in mind, and there could be unintended consequences.

Will a car decide your fate?

As driverless cars become ubiquitous, we will hand over decision making in potentially fatal incidents to an autonomous system. This raises fresh concerns about crash prevention and, more seriously, about reducing the consequences of an imminent collision. Can an autonomous car assess risk and decide which life is most worth saving? Maybe. And thus we have a futuristic version of the Trolley Problem.


Patrick Lin, director of the Ethics + Emerging Sciences Group at the California Polytechnic State University, raised these concerns in an opinion piece for Wired last year regarding how vehicles will be programmed to react to oncoming danger or imminent collisions. Lin makes the point that technology can’t be truly unbiased, and while these particular kinds of incidents and crashes will be rare, they can still happen.


He uses an example of a potential crash involving three vehicles: your autonomous car, another driver’s Mini, and a third driver’s large SUV. If your autonomous car is hurtling toward a collision, it needs to make a split-second decision to swerve left or right, meaning it will hit one of those two vehicles. The larger SUV can absorb more shock and damage, so mathematically it may seem like the “safer” option to crash into. However, the SUV is also more likely to be carrying more people, perhaps a family, meaning you could be putting children in danger.

On the flip side, if you crash into the Mini, it will likely be totaled, injuring or killing its driver or passengers.


So can a car really make that decision? And if it can, should it?

“The short answer in human ethical terms is this: ‘The car is only trying to save itself,’” says Matthew Strebe, CEO of Connetic, a company developing autonomous cybersecurity defenses.

“The car knows nothing about any lives. There is no calculation regarding the value of human lives going on in autonomous cars at all,” he says. “The car is not attempting to save its own passengers, or to save pedestrians or the occupants of other vehicles. The car is making only one decision, but it’s making it many thousands of times per second.”

The car won’t necessarily take into account that an approaching car has a young child in it (realistically, how would it know? A human driver wouldn’t know for sure either). But it will probably be programmed to make the best decision it can to protect itself and its passengers, based on the limited information on hand.
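To make that idea concrete, here is a deliberately crude, hypothetical sketch of the kind of risk-minimizing choice described above, written in Python purely for illustration. The maneuver names, numbers, and risk formula are all invented for this example; no real vehicle works from code this simple.

```python
# Hypothetical illustration only: a toy risk-minimizing swerve decision.
# The maneuver names, numbers, and formula are invented; this is not any
# real vehicle's control code.

# Candidate maneuvers the car could take in the split second before impact.
options = {
    "swerve_left_into_suv": {"impact_speed_mps": 12, "target_mass_kg": 2500},
    "swerve_right_into_mini": {"impact_speed_mps": 12, "target_mass_kg": 1100},
    "brake_straight": {"impact_speed_mps": 18, "target_mass_kg": 1500},
}


def estimated_risk(maneuver: dict) -> float:
    """Crude stand-in for a risk score: faster, less forgiving impacts score worse.

    A real system would fuse sensor data and vehicle dynamics; here the score
    is just impact speed scaled up when the other vehicle is light (a lighter
    car absorbs less of the blow, so hitting it is modeled as riskier).
    """
    return maneuver["impact_speed_mps"] * (2000 / maneuver["target_mass_kg"])


# In practice a loop like this would repeat many times per second; each pass
# simply picks the lowest-risk maneuver available right now. Note what is
# absent: nothing here knows or cares who is inside the other vehicles.
best = min(options, key=lambda name: estimated_risk(options[name]))
print(best)  # -> "swerve_left_into_suv" with these made-up numbers
```

What matters in the sketch is what’s missing: the score is built only from physical quantities the car could plausibly estimate, with no representation of who is inside the other vehicles, which is exactly Strebe’s point.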


Still, this setup carries unintended consequences. Does it put the onus on the programmer and the company that made the car? Possibly, and as autonomy shifts from human to car, it’s a question we’ll have to at least try to answer.

Smarter cars need smarter testing

So far driverless cars have been tested heavily on highways or motorways where the flow of traffic is a little safer compared to city streets. There are fewer variables, which means road testing on city streets will need to be much more intensive and will put these Trolley Problem-like scenarios under the microscope.

Bristol in the U.K. looks set to be a test bed for autonomous vehicles sometime in 2015, with a consortium of academics and engineers planning how best to approach road tests in the city and its surrounding areas. They say they will be “investigating the legal and insurance aspects of driverless cars and exploring how the public react to such vehicles.”


A report from Lloyd’s, the insurance market, says connected cars face the same basic risks as conventional cars, but that the nature of those risks is very different, chiefly because risk shifts from human to machine.

“With less reliance on a human driver’s input, however, increased risk would be associated with the car technology itself,” says the report, pointing out that a car can do things a human driver can’t, like see through fog. “However, [the technology] can also fail, and systems are only as good as their designers and programmers. With an increased complexity of hardware and software used in cars, there will also be more that can go wrong.”

The authors of the report suggest that a computing error in cars could have a much more devastating effect in a crash than a human error. “A computer miscalculation or a faulty reading from a sensor could lead a car to do something that a human driver would instinctively realize is inappropriate,” they say. “This could potentially lead to unusual and more complicated types of accidents which are hard to predict the nature of.” Basically, we have an idea of how human drivers screw up, but we don’t know how computer drivers will.

Still, Strebe remains confident in the benefits and that accidents will drop drastically in the future. “Two driverless cars will never hit each other, unless a human has caused an accident that they are both involved in,” he says. “Driverless cars will rarely if ever be found to be ‘at fault’ in an accident.”


Hacking threats and the wild future of car insurance

These aren’t the only concerns around the safety of connected cars and autonomous driving. As is commonly the case with new Internet of Things devices, hacking and other cyber threats are also a very real issue.

Just last summer, a group of Chinese students demonstrated how they could compromise a Tesla Model S, opening its doors while the car was in motion. Had the students been malicious actors, passengers could have been at serious risk. In fact, some are convinced that such an attack is a matter of when, not if. According to a new report by Sen. Ed Markey’s staff, all modern vehicles are Internet-connected, and the protections in place to prevent hacking are “haphazard” or nonexistent. The report says that while some automakers take precautions by encrypting the messages a car sends to its internal systems, others don’t.

“A majority of automakers offer technologies that collect and wirelessly transmit driving history data to data centers, including third-party data centers, and most do not describe effective means to secure the data,” the report states. The senator believes federal regulation is necessary to iron out the various wrinkles here.



And then there are questions over fault. “One of the greatest dangers of driverless cars is a lack of reasonable legal remedies after an accident,” says Michael Gumprecht, an auto accident attorney in Atlanta.

“Currently, the at-fault driver has an insurance policy that can be pursued and negotiated for settlement or a lawsuit if necessary. If they don’t have coverage or they don’t have enough, your own auto policy can stand in their shoes to contribute toward your recovery,” he says. “If human beings are not driving, only the companies who made the vehicle can be pursued in some type of product liability lawsuit, which is far more costly, time-consuming, and potentially fruitless for an accident victim than the current configuration.”

Over the holidays, the California DMV announced that it would miss its New Year’s deadline for enacting regulations for driverless cars, citing safety concerns that remained unaddressed. Google has already tested its car on roads in its home state of California, and the DMV’s delay on rules suggests that many wrinkles still haven’t been ironed out.


The advent of driverless cars is exciting for advocates and critics alike, but 2015 will likely be the year when both sides need to start thoroughly addressing these serious concerns for the long-term success of the technology.

Photo via Wikimedia Commons (CC BY 2.0) | Remix by Max Fleishman

 