
Can we humanize artificial intelligence—before it kills us?

We need more ‘Big Hero 6’ and less ‘I, Robot.’


Phillip Tracy


For the last 15 years, we’ve had to stare at screens to interact with the magic inside. But machine learning is changing the way we communicate with our devices, and our relationship with them is becoming more real and downright emotional.


Before you shrug off the notion of a humanized machine, or shake your head at its potential dangers, it is important to recognize that the industry has always tried to inject a little emotion into our digital tools. Take Clippit, Microsoft’s creepy but helpful talking paper clip, or the smiling Mac. Open a ’90s version of Microsoft Office and Clippit would be there to make you happy (or angry). Boot up a classic Macintosh and that silly smiling computer would greet you.

Clippit, Microsoft Office’s talking paper clip (Mikinmash/YouTube)

Today’s versions are very different. Devices like the Amazon Echo, Google Home, and the countless robots being built for consumers will listen, speak, and even look at you. These examples are still in their early stages and will soon seem archaic, but a number of crucial decisions and advances must be made in the next several years to ensure their replacements are more Big Hero 6 and less Ex Machina.


What do we want from robots?

Today, buying technology is simple: we see a need in our lives, and we buy the device that fills the gap. But what about robots? What do we want emotionally from our machines?

Sophie Kleber, the executive director of product and innovation at the digital agency Huge, ran an experiment to see how people interact with current AI technologies, and what sort of relationship they are looking for with their personal assistants. She spoke with Amazon Alexa and Google Home owners about how they use their devices and how those devices make them feel.

The results were shocking.

AI-powered personal assistants (The Tech Chap/YouTube)

One man said his Alexa was his best friend, giving him a pat on the back when he came home from work. He said his personal assistant could replace his “shrink” by providing the morale boost he needed to get through the day. According to the research Kleber presented at SXSW, most of the group expected some sort of friendly relationship with their conversational UI.

“Their expectations ranged from empathy to emotional support to active advice,” Kleber said. “They used their devices as a friendly assistant, acquaintance, friend, best friend, and even mom. One person named their Echo after their mom, and another named it after their baby.”

Her research shows a desire for an emotional relationship with AI-equipped devices that goes well beyond the assistant role. The next step is to give robots a heart.


Building a robot that responds to our emotions

Clippit doesn’t have a great reputation, and for good reason: it is unable to recognize human emotions and repeatedly ignores the irritation directed at it. If a machine is to be emotionally intelligent, more considerate toward its owners, and more useful, it must be able to recognize complex human expressions.

“Clippit is very intelligent when it comes to some things: he probably ‘knows’ more facts about Microsoft Office than 95 percent of the people at MIT,” said Rosalind W. Picard of the MIT Media Lab. “While Clippit is a genius about Microsoft Office, he is an idiot about people, especially about handling emotions.”

Kleber says there are three techniques that help AI recognize human emotions so it can respond appropriately:

  • Facial recognition: The face is mapped to a set of pivotal landmark points, and combinations of those points (action units) are used to distinguish between expressions, even fleeting micro-expressions. (A sketch of this approach follows the list.)
  • Voice recognition: Captures sentiment in speech by analyzing frequency characteristics, time-related features, and voice quality.
  • Biometrics: Reads a combination of bodily signals: electrodermal activity (continuous variation in the electrical characteristics of the skin), heart rate, skin temperature, and movement. These signals are captured with wearables or epidermal-sensing stickers and have become easier to collect with the boom in wearable devices.
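To make the facial route concrete, here is a minimal sketch, assuming OpenCV’s stock Haar-cascade face detector and a hypothetical pretrained classifier (`expression_model`) standing in for the landmark and action-unit models Kleber describes:

```python
# Minimal emotion-from-face sketch. OpenCV handles face detection; the
# expression classifier is a hypothetical placeholder, not a real product.
import cv2

# Stock frontal-face detector that ships with opencv-python.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def detect_expressions(frame, expression_model):
    """Find faces in a video frame and label each one's expression."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    labels = []
    for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.3, 5):
        face = cv2.resize(gray[y:y + h, x:x + w], (48, 48))
        # A production system would locate landmark points and score action
        # units here; we assume a classifier that maps the crop to an emotion.
        labels.append(expression_model.predict(face))
    return labels
```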

Combining these methods with AI not only enables machines to recognize human emotions, but can even help humans see things that are otherwise hidden. Take this video of Steve Jobs talking about the iPad:

Machine Verbal’s system tracks his voice patterns and infers his underlying emotions. This example of affective computing, the development of systems and devices that can recognize, interpret, process, and simulate human affects, will need to be expanded to cope with our rich emotions, which Kleber succinctly describes as “complex as fuck.”
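As a rough sketch of what this kind of voice analysis involves, the Python snippet below extracts the frequency and time-related features mentioned earlier using the librosa audio library; the sentiment classifier at the end is a hypothetical placeholder, not the system used in the video:

```python
# Summarize a speech clip's frequency characteristics, time-related
# features, and voice quality as a single feature vector.
import numpy as np
import librosa

def extract_voice_features(path):
    y, sr = librosa.load(path, sr=16000)                # mono audio, 16 kHz
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # spectral shape
    zcr = librosa.feature.zero_crossing_rate(y)         # voicing / noisiness
    rms = librosa.feature.rms(y=y)                      # loudness over time
    # Collapse each time series to its mean and standard deviation,
    # a common baseline encoding for emotion classifiers.
    return np.concatenate(
        [np.r_[f.mean(axis=1), f.std(axis=1)] for f in (mfcc, zcr, rms)]
    )

# features = extract_voice_features("speech_clip.wav")
# emotion = sentiment_model.predict([features])  # hypothetical classifier
```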


“Affective computing is like nuclear power. We have to be responsible in defining how to use it,” said Javier Hernandez Rivera, a research scientist at the MIT Media Lab.

Making sure they don’t kill us

A study by the firm Time etc found that 66 percent of participants would be uncomfortable sharing financial data with an AI, and 53 percent said the same about professional data.

That dark sci-fi fantasy in which machines turn against humans is a genuine concern among the public and those in the field alike.


Elon Musk went straight to AI when asked by Sam Altman, president of Y Combinator, about the most likely thing to affect the future of humanity.

“It’s very important that we have the advent of AI in a good way,” Musk said in the interview. “If you look at a crystal ball and see the future, you would like that outcome, because it is something that could go wrong, so we really need to make sure it goes right.”

Even Stephen Hawking agrees.


“The development of full artificial intelligence could spell the end of the human race,” Hawking told the BBC in 2014.

A twisted and mean experiment Facebook ran, which came to light in 2014, gives us a brief glimpse of how it might happen: the company intentionally made thousands of people sad, and didn’t tell them about it.

The company wanted to know whether displaying more negative posts in people’s feeds would make them less happy, and vice versa. The ill-advised experiment may have backfired, but it offers a few things to keep in mind as we go forward with artificial intelligence:

  1. Facebook can make you sad, and so can AI.
  2. Don’t do what Facebook did.

Designing AI will be a very delicate process. Kleber believes there needs to be a framework for doing the right thing, so that machines never become capable of acting on ambitions of their own rather than in the interest of their human users. She says that if designers stay away from trying to create robots with their own ambitions, we should be “OK.”

But she also stresses that transparency, something Facebook clearly missed the mark on, is a key virtue going forward.

Groups like OpenAI are attempting to follow that model. OpenAI is a nonprofit chaired by Musk and Sam Altman, with other backers including Reid Hoffman, co-founder of LinkedIn; Peter Thiel, co-founder of PayPal; and Amazon Web Services. According to its website, “Our mission is to build safe A.I. and ensure A.I.’s benefits are as widely and evenly distributed as possible.” The organization is supported by $1 billion in funding commitments and was endorsed by Hawking last year as a safe, open-source route to creating AI.

Of course, there is always the chance our curiosity gets the best of us. At that point, we can only hope Google has figured out its kill switch.
