
You might feel a robot’s pain—but it doesn’t feel yours

The future of empathy just got wired.

Gillian Branstetter

A new study published in the journal Scientific Reports might finally explain why you keep crying at the end of WALL-E. The study, conducted by five researchers across Germany and Japan, showed subjects images of pain being inflicted on two hands, one robotic and one human. Tracking the chain of neural activity that creates empathy in the human brain, the researchers found that humans responded with more initial empathy to the robotic hand being cut with a knife, even though respondents felt more empathy for the human hand when no pain was being inflicted.


As robots become an increasingly common part of public life, the study raises a number of important questions. In interviews conducted by the Pew Research Center last year, experts in robotics and artificial intelligence agreed that “the penetration of robotics and AI will be close to 100 percent” by 2025. Robots are already integrated into dozens of job sectors, like manufacturing and the service industry, and countries with low birth rates, like Japan and Taiwan, are increasingly counting on robots to care for aging populations.

If we feel empathy for our creations, however, how readily will we rely on them at the expense of their own suffering? This kind of talk might seem absurdly sci-fi, but experts like Stephen Hawking, Bill Gates, and Elon Musk agree we need to consider the ethical dilemmas presented by AI before the technology exists, not after. We must consider the possibility of a robot’s pain, and whether we should share in its suffering, before too much damage is done to either side.



The matter of whether a machine can feel pain is largely a question of whether machines can think at all. René Descartes, in his 1637 tract Discourse on the Method, ponders whether signals of pain are true signs of intelligence within machines; he writes, “[F]or we can conceive of a machine so constructed that it utters words, and even utters words which correspond to bodily actions causing a change in its organs (e.g. if you touch it in one spot it asks what you want of it, if you touch it in another it cries out that you are hurting it).”

In the 20th century, British mathematician Alan Turing’s paper “Computing Machinery and Intelligence” would popularize the concept of artificial intelligence and the question of whether these hypothetical automatons could experience pain or pleasure. British neurologist Geoffrey Jefferson argued in 1949 that machines cannot think original thoughts and, therefore, lack emotion. “No mechanism could feel pleasure at its successes,” Jefferson wrote, “grief when its valves fuse, be warmed by flattery, be made miserable by its mistakes, be charmed by sex, be angry or depressed when it cannot get what it wants.” Philosopher Daniel Dennett argued in 1978 that any experience, including that of pain, could not truly be “felt” by computers, because the entire system would merely be a simulation.

Against this skepticism, Abraham Sapien-Cordoba, Associate Professor of Philosophy at the University of Glasgow and director of The Value of Suffering project, believes that robots that can do everything a human can will absolutely be capable of feeling pain. “While we don’t have any existent machines that can feel pain,” writes Cordoba, “computers can now be programmed to reproduce human behaviors that once appeared impossible, such as playing chess and solving all kinds of mathematical problems. So, is there any reason why they couldn’t, in principle, have experiences?”


In the last decade, researchers have made serious progress on the hardware problem of making a computer physically “feel” something. In 2006, University of Nebraska researchers showed off a robotic hand with a sense of touch refined enough to identify the numbers on a coin. 2007 saw the introduction of a dental training robot that reacts as a human would to pricked gums or sore teeth. In 2009, DARPA unveiled artificial skin designed to give robots the ability to sense pressure and temperature.

Here, we could engage in an ontological debate about the difference between simulated pain and real pain: Are the neurons and synapses that make up my physical pain meaningfully different from the wires, sensors, and code that make a robot react to “pain”? The distinction is important, because it could draw a mental line in how we treat machines, humans, and other creatures.
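To make that distinction concrete, here is a minimal sketch, in Python, of what “simulated pain” amounts to in a machine today. Everything in it is hypothetical and invented for illustration; no real robot or library is being quoted. The point is that the machine’s entire “experience” of pain reduces to a number crossing a threshold and triggering a scripted response.

```python
# A toy illustration, invented for this article: "simulated pain" as a
# sensor reading crossing a threshold. None of these names come from a
# real robotics library; read_pressure_sensor() stands in for hardware.

PAIN_THRESHOLD = 8.0  # arbitrary units from a hypothetical fingertip sensor


def read_pressure_sensor() -> float:
    """Stand-in for a hardware read; pretend the hand was just pricked."""
    return 9.2


def react_to_stimulus(pressure: float) -> str:
    # The robot's entire "pain" is this branch: no nociceptors, no affect,
    # just a conditional mapping a number onto a scripted behavior.
    if pressure > PAIN_THRESHOLD:
        return "withdraw hand and cry out"
    return "continue task"


print(react_to_stimulus(read_pressure_sensor()))
```

Whether that conditional differs in kind from a nociceptor firing, or only in degree, is exactly what the ontological debate turns on.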



Such a divide is what the study’s researchers have tried to unpack. By the authors’ own admission, the spike in empathy subjects experienced at seeing the robot hand hurt is likely due to its humanoid nature. “Because the basic shape of the robot hand in the present study was the same as that of the human hand, the human participants may have been able to empathize with the robot hand,” the paper reads. “It is thus necessary to test whether a robot hand in very different shape (e.g., a robot hand without fingers) can elicit similar empathic responses in a future study.”

The first artificially intelligent machines will likely resemble a bank of computers more than, say, C-3PO. But machines can appear human in more than just physical traits: voice simulation has gone a long way toward making people connect with computers, and even chatbots have proven to be curious outlets for human emotion. Think of Samantha, the sentient operating system from the movie Her, who imitates the passion of a lover despite having no physical presence. Or even R2-D2: just a few clicks, beeps, and whirs create a charming and heroic character whom millions love.

https://www.youtube.com/watch?v=WzV6mXIOVl4

This extends beyond mechanical beings, as well. For centuries, and until surprisingly recently, scientists believed many animals could not feel pain. While today it is generally understood that animals feel pain, veterinarians were trained as late as 1989 to ignore animals’ cries during operations. If that isn’t cruel enough, consider this: As late as 1987, it was still commonplace for pediatric surgeons to operate on newborn babies without anesthesia, because some still believed the young humans were too “primitive” to experience pain.



Our perception of what can and cannot experience pain, and therefore deserves our empathy, is still growing. One of the great paradoxes of AI research is that we can build vastly intelligent computerized brains but have yet to build them a believable or sturdy physical form. It’s easier to make a machine that can run a restaurant than it is to make one that can be a busboy. While this study suggests we base our empathy for machines on the relative “humanness” of their appearance, it’s quite possible we’ll develop machines that can feel pain, or even sorrow, long before we develop anything that reliably looks like us.

As many warn us, however, it might be less important that we consider the feelings of machines and more important that we design them to consider ours.

Gillian Branstetter is a social commentator with a focus on the intersection of technology, security, and politics. Her work has appeared in the Washington Post, Business Insider, Salon, the Week, and xoJane. She attended Pennsylvania State University. Follow her on Twitter @GillBranstetter.
