
How emotion research is pulling robots out of the uncanny valley and into our lives

Emotions may make the uncanny valley a little less deep.


Selena Larson


Sometimes robots make us feel uneasy because they make us think of death. Now, researchers are putting some real effort behind making our immortal creations feel a bit more alive. 


Humanlike robots can prompt unconscious thoughts and concerns about death, something psychologists call “death-thought accessibility,” or DTA. These thoughts contribute to what is known as the uncanny valley: the idea that humanoid robots that closely mirror the look and behavior of people make us uncomfortable.

It’s similar to the thoughts evoked in our brains when we look at zombies or clowns because emotional expression is absent.

There are a number of reasons why the uncanny valley exists, ranging from media portrayals of anthropomorphic robots to the simple fact that they often remind us of our own mortality. And then there’s the idea that the lack of emotion and empathy in humanoid robots serves as a reminder that, despite looking like a living thing, they are not one.



“It’s mostly, we think, because it lacks human nature, it lacks the warmth of emotional expression and empathy,” Dr. Miriam Koschate, psychology lecturer at the University of Exeter and lead researcher of a new study examining emotion and the uncanny valley, told the Daily Dot. “And that makes us be reminded of death. Because that’s what a corpse is. It’s a human being but it lacks warmth and human nature and it can’t react anymore.” 

Koschate and fellow researchers from the University of Exeter’s psychology department and the University of the West of England’s robotics lab investigated how emotions can affect DTA, and potentially eliminate the uncanny valley. They presented the paper at the Human-Robot Interaction Conference this week.

To figure out whether a robot’s creep factor diminishes when emotions are added, researchers tested whether three humanoid robots elicit DTA: Honda’s ASIMO, Aldebaran Robotics’ Pepper, and Hanson Robotics’ Jules.


To test these responses, researchers showed 95 participants photos of the robots, followed by a survey about them. To measure DTA, participants were asked to complete a list of 20 word fragments, six of which could be completed as words associated with death. For instance, COFF__ could become either COFFEE or COFFIN.
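As a rough sketch of how a fragment task like this can be scored (the fragments, word lists, and function below are illustrative assumptions, not the study’s actual materials), counting death-related completions might look like this:

```python
# Hypothetical sketch of scoring a word-fragment DTA measure.
# The fragments and death-word lists are illustrative, not the
# actual materials used in the Exeter study.

# Each fragment maps to the completions counted as death-related.
DEATH_COMPLETIONS = {
    "COFF__": {"COFFIN"},  # the neutral completion would be COFFEE
    "GRA__": {"GRAVE"},    # vs. GRAPE or GRADE
    "DE__": {"DEAD"},      # vs. DEAL or DEAR
}

def dta_score(responses: dict) -> int:
    """Count how many fragments a participant completed as death words."""
    score = 0
    for fragment, answer in responses.items():
        if answer.upper() in DEATH_COMPLETIONS.get(fragment, set()):
            score += 1
    return score

# A participant who writes COFFIN and GRAPE scores 1 on these fragments.
print(dta_score({"COFF__": "coffin", "GRA__": "grape"}))  # -> 1
```

Higher tallies indicate higher death-thought accessibility; the study compared these tallies across the three robots.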

Researchers found that Jules, the most humanlike robot of them all, elicited the highest DTA scores. Curiously, ASIMO, the least humanlike of the anthropomorphic robots in question, provoked higher DTA than the cartoonish Pepper.

Koschate said this may be because Pepper is more familiar-looking than the industrial ASIMO without being as corpse-like as Jules. Pepper can learn to read your emotions, but not fully show its own, and it was created to be a personal-assistant robot for the home.

“Looking at Jules and others, you can see much more is possible in terms of how humanlike they are,” Koschate said. “But you feel that the people creating humanlike robots being rolled out now are really holding back because they don’t want to fall into the uncanny valley. It’s a shame because people create these cartoonish robots rather than really familiar or properly humanlike robots because they’re worried about the uncanny valley.”



In a second experiment, researchers used Jules to test uncanniness with in-person participants, to determine whether adding emotional responses to a humanlike robot head would ease the discomfort people feel when looking at a robot that shows no reactions.

Forty-four participants were asked to tell the robot two stories: one about a success and one about a failure. For some participants, Jules smiled after hearing the success story and frowned after the failure story. In the control group, Jules did not react at all.
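For illustration, the logic behind these conditions can be sketched as follows (the condition names and the react function are hypothetical, not the researchers’ actual control software):

```python
# Hypothetical sketch of the experiment's response conditions.
# Condition names and expression labels are assumptions for
# illustration; the study's robot-control code is not shown here.

def react(condition: str, story_type: str) -> str:
    """Return the expression Jules shows after hearing a story."""
    if condition == "control":
        return "neutral"   # control group: no reaction at all
    if story_type == "success":
        return "smile"     # mirror the positive story
    if story_type == "failure":
        return "frown"     # mirror the negative story
    return "neutral"

# Emotion condition: smile after success, frown after failure.
assert react("emotion", "success") == "smile"
assert react("emotion", "failure") == "frown"
# Control condition: stone-faced regardless of the story.
assert react("control", "success") == "neutral"
```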

Researchers found that people who saw Jules smile or frown reported lower levels of DTA and uncanniness than those who interacted with the stone-faced Jules. However, these experiments only reflect how people react to the perception of emotion, not a robot’s actual capacity for feeling.



While the University of Exeter study demonstrates that the appearance of feelings lowers DTA, previous research indicates that when people are told robots experience emotions without showing them, the feeling of uncanniness actually increases, because people feel like the computers are hiding something.

“When you express [emotions] or see someone else express emotions, you make the assumption they can feel it,” Koschate said. “It’s hard to think so logically about someone expressing a smile, and you think ‘Oh, they’re just faking it.’ You just believe in [the smile]. In our case, it’s really having to do with the expression of the emotion. It may have to do with making assumptions, but we haven’t tested it, so we don’t know if people make assumptions about how much the robot feels.” 

Behavior would also likely affect the perception of eeriness, Koschate said. If a robot looked and acted threatening, like those in The Terminator, DTA would be greater. But if the robot were a companion that offered hugs or other empathetic gestures, people would be more comfortable around it.


“Roboticists may have focused more on making them more clever and strong and physical, and neglected the emotions a bit, because emotions aren’t seen as something helpful, [but rather] potentially distracting,” she said. “We think if you want it to be something social, then emotions are key.”

Perhaps the stereotype of emotion as weakness has infiltrated robotics development, and the emotion switch simply hasn’t been flipped yet, Koschate said. But the findings indicate that social roboticists would do well to focus on emotion in the humanoids they build.

The study suggests that it’s possible to lower the uncanniness of humanoid robots, but Koschate is unsure whether it would be a good idea to get rid of the uncanny valley entirely. Would that look like Black Mirror’s “Be Right Back,” in which a human gets too attached to her empathetic and almost-human robot? Or would we successfully maintain the separation between human and machine? These are questions we’ll run into in the future.

“If we were to eliminate [the uncanny valley] entirely, what would that mean for the relationship between humans and robots, particularly if you add artificial intelligence?” she said. “It’s very tricky, but mostly [our study] has to do with people thinking about it a little more in terms of adding something and thinking about human nature, not just focusing on visuals or intelligence of human nature, but also focusing on other aspects.”


Illustration via Max Fleishman
