
Study shows that when emergencies hit, humans will trust robots with their lives

In times of crisis, people implicitly trusted this bot.


Selena Larson


Imagine being in an enclosed room with a robot. You don’t know anything about the robot, but a sign on it says it’s an emergency guide. Outside the room, smoke begins to billow, and the bot moves in a direction opposite from the exit you know exists. What do you do? 


Do you put your trust in a robot that appears to know what it’s doing, or do you go out the way you entered the room, a main exit you know will take you in the right direction? 

As it turns out, humans will usually place their trust in the robot, even if the robot is wrong. According to a new study from the Georgia Institute of Technology, most people will follow an autonomous machine in an emergency even if it leads them away from the exit. 

In the study, participants were told to follow a robot to a closed room where they would fill out a survey and then continue the experiment. They were not told there would be an emergency, so it came as a surprise when smoke began to fill the space outside the room. Participants had to choose whether to follow the robot (controlled by researchers), which led them in a wrong direction, or go out the main entrance. All 26 participants followed the robot. 


Further, when the robot exhibited behavior that could be considered buggy, people still trusted it to lead them in the right direction away from the emergency. The bot moved in circles and directed people to the wrong room. But when the emergency happened, people still followed it, assuming it would lead them to safety. Shockingly, when the robot pointed six people to a dark room with an entrance obstructed by furniture, two people went into the room and two others stayed with the robot and did not evacuate. 

“We expected that people would generally not trust a robot if it had made a mistake recently,” Paul Robinette, lead researcher on the study, said in an interview with the Daily Dot. “We didn’t necessarily expect participants to follow it if it had done well in the past. In talking to other robotics researchers, most people did not expect normal people to trust robots just right away.”

Researchers consider this “over trust” in an imperfect robot to be a troubling problem that needs to be addressed through further research that investigates why people implicitly trust the bot even after it demonstrates bad behavior. 


As robots and automated technology become more ubiquitous—from the algorithms that determine the content we see on social networks to driverless cars on highways—trust will play a major role in the types of behaviors robots can execute, as well as the potential problems that could result if trust is manipulated by the people who build the bots. And in environments like the one studied, robots could be programmed to go back into situations to look for survivors, not necessarily lead people out of emergencies. How would humans know that?

“One of the things we want to look at for the over trust issue is: who over trusts the robot?” Robinette said. “In this particular study we had mostly college-age students, mostly associated with Georgia Tech, so we want to make sure that data is still valid with a larger sample size.”

In surveys after the experiment, students said they trusted the robot because it wore a sign identifying it as an "emergency guide," and they perceived the robot as an authority figure. Robinette wants to conduct similar experiments on trust, including testing whether students would still follow the robot if it weren't identified as an emergency guide. 

It's similar to how users trust Google Maps to give them proper directions, even when someone who lives in the neighborhood might know a shortcut. 


Emergency situations are just one of the ways robots will influence human behavior. And while emergency bots might instill trust during high-anxiety situations, people are much more wary of robots impacting everyday life.

Almost half of U.S. adults believe driverless cars are not safe, and 51 percent wouldn’t be a passenger in one, according to a poll of 1,869 registered voters conducted earlier this year. Despite autonomous cars being relatively safe and the federal government supporting their development, people still trust themselves and other humans over autonomous vehicles. Google’s driverless car was in its first at-fault accident in February. 

Robinette said there may be a difference between the trust people express in a survey and the trust they show when actually riding in a driverless car. And unlike with autonomous vehicles, where people can stick with the cars they already know, participants had no such familiar alternative in the case of the emergency bot. 

People have a hard time trusting robots with even relatively simple tasks such as predicting academic success or airline traffic. In one surprising study, researchers found that people trusted humans more than algorithms after seeing both perform predictions, even if the algorithms predicted outcomes significantly more accurately than the humans. Once they saw the computer screw up, no matter how insignificantly, they trusted humans more. 


“One of the questions we’re hoping to approach next is how can a robot communicate when it should not be followed at the moment, especially in a noisy and chaotic environment like an emergency,” Robinette said. “So it can’t just necessarily tell people what to do or what not to do. How can it communicate that information?”

Photo via Georgia Tech
