Why Microsoft believes that AI bots are the only logical path forward

The future of communication is bots.

Selena Larson

For Microsoft, the future of communication is bots. The company believes that artificial intelligence will eventually change the way we connect with people, businesses, and our computers. 

At Microsoft’s Build developer conference on Wednesday, the company highlighted a handful of ways it’s integrating bots into platforms like Skype, and touched on new developments for Cortana, its personal assistant. Microsoft also debuted a bot framework that lets developers integrate bots into their applications: much as mobile developers build apps for iOS or Android, they can now build bots on top of Microsoft’s framework. 
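
In practice, a framework-hosted bot boils down to a message-in, message-out handler, with the framework handling the plumbing for each chat channel. Here is a minimal sketch of that shape; the class and method names are invented for illustration and are not Microsoft’s actual Bot Framework API:

```python
# A bot, conceptually: text goes in, a reply comes out. The framework
# routes messages to and from channels like Skype; the developer only
# writes the conversational logic. All names here are illustrative.

class EchoBot:
    def on_message(self, text: str, user: str) -> str:
        """Handle one incoming chat message and return the reply."""
        return f"Hi {user}, you said: {text}"

bot = EchoBot()
print(bot.on_message("book me a hotel", "selena"))
# -> Hi selena, you said: book me a hotel
```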

Ordering pizza, helping with the kids’ homework, booking hotel rooms: these seemingly simple tasks will fall to an army of bots in our phones, computers, and chat systems. These personal assistants, powered by AI, will comb through data, apply logic, and execute tasks based on our needs. 

Microsoft’s big bot bet comes on the heels of an embarrassing debacle that took over Twitter last week. Tay, an AI chatbot meant to communicate like a teenager and learn through conversation with millennials, became racist and sexist in less than 24 hours, because the humans she was supposed to learn from trained her to be exactly that. 

Tay went offline quickly, and Microsoft is working on improving the tech so it can defend itself against potential harassment. But the Tay fiasco highlights the challenges of building bots: if Tay can be manipulated so quickly, what will a world filled with her brothers and sisters look like? 

“We can think of bots as dumb toddlers. In Tay’s case, she didn’t know better. An Internet hate group told her these horrible racist things were right over and over again, and so she cluelessly parroted it right back to other users. She lacked the reasoning and ‘common sense’ that we have (as humans) to understand that it was wrong,” Alyx Baldwin, cofounder and CTO of the personal shopping AI chatbot Kip, said in an email to the Daily Dot. 

“In order to make sophisticated judgements, bots need to learn how to deal with ambiguous situations which requires contemporary data points,” she said. “Ethical bots have to be trained on socially diverse sets of data so that they have an unbiased viewpoint and not just representing the views of one group.”
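
There is no simple recipe for socially diverse training data, but one crude way to picture the idea is stratified sampling: drawing training examples evenly from different communities instead of letting one loud group (like the one that trained Tay) dominate. A hypothetical sketch, not Kip’s actual pipeline:

```python
import random
from collections import defaultdict

def balanced_sample(examples, per_group=100):
    """Draw up to per_group training examples from each source community,
    so no single group's views dominate what the bot learns.

    examples: iterable of (text, community) pairs -- a hypothetical format.
    """
    by_group = defaultdict(list)
    for text, community in examples:
        by_group[community].append(text)
    sample = []
    for texts in by_group.values():
        sample.extend(random.sample(texts, min(per_group, len(texts))))
    random.shuffle(sample)
    return sample

corpus = [("hello!", "group_a"), ("hi there", "group_b"), ("hey", "group_a")]
print(balanced_sample(corpus, per_group=1))  # one example per community
```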

How to build better bots

During the keynote, Microsoft touched on three key principles of its approach to building AI: it should augment human abilities and experiences, and it should be trustworthy, inclusive, and respectful. 

Tay lacked these fundamentals, but she wasn’t Microsoft’s first teen chatbot. In China, the female teen chatbot Xiaoice talks to millions of people across social media platforms without any of the problems Tay encountered on Twitter. Humans are unpredictable, and the company didn’t account for Twitter harassers when it unleashed Tay on the world. 

At the Georgia Institute of Technology, researchers built a system that teaches robots human values and right from wrong, potentially programming them to avoid situations where abuse could transform a bot into something its creator didn’t intend. 

Called Quixote, the AI system teaches “value alignment” by reading stories and rewarding behavior that matches the protagonist’s, signaling an understanding of “good” behavior. Mark Riedl, associate professor and director of the Entertainment Intelligence Lab, explains how computers, like children, can learn ethical behavior through storytelling. 

“The collected stories of different cultures teach children how to behave in socially acceptable ways with examples of proper and improper behavior in fables, novels and other literature,” Riedl said. “We believe story comprehension in robots can eliminate psychotic-appearing behavior and reinforce choices that won’t harm humans and still achieve the intended purpose.”
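
As a toy illustration of that idea (loosely inspired by the published description of Quixote, not the actual system), imagine rewarding an agent whenever it does what the protagonist of a story did in the same situation:

```python
# A "story" reduced to (situation, protagonist_action) pairs.
# The scenario and labels are invented for illustration.
PHARMACY_STORY = [
    ("need_medicine", "wait_in_line"),
    ("at_counter", "pay_for_medicine"),
    ("have_medicine", "go_home"),
]

def reward(situation: str, action: str, story=PHARMACY_STORY) -> int:
    """+1 when the agent acts like the story's protagonist, -1 otherwise."""
    expected = dict(story).get(situation)
    return 1 if action == expected else -1

print(reward("need_medicine", "steal_medicine"))  # -1: punished
print(reward("need_medicine", "wait_in_line"))    # +1: reinforced
```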

Training bots on wide ranges of domain knowledge and implementing filters to recognize abuse, such as swear words or words commonly associated with violence or harassment, is already possible and relatively simple for bot creators. Botmaker Darius Kazemi, for instance, created Wordfilter, an open-source module that filters banned words, for exactly this purpose. 
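
The core of such a filter is only a few lines. This sketch uses a toy blocklist and substring matching rather than Wordfilter’s actual word list:

```python
# Toy blocklist; a real bot would use a maintained list (and catch
# spelling variants) rather than these placeholder words.
BANNED = {"badword", "slur"}

def is_blocked(text: str) -> bool:
    """True if the text contains a banned term (substring match, so
    'badwordly' is caught too)."""
    lowered = text.lower()
    return any(term in lowered for term in BANNED)

def safe_reply(user_text: str, generate) -> str:
    # Refuse to learn from, or respond in kind to, abusive input.
    if is_blocked(user_text):
        return "Let's talk about something else."
    return generate(user_text)

print(is_blocked("you are a BADWORD"))  # True
```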

Botmakers.org provides a list of tools and resources for developers to create ethical bots, and the organization has a Code of Conduct that applies to all bots its more than 700 members create. 

Stefan Bohacek, Botmakers’ organizer, said reaching outside your usual social and professional circles can give you a different perspective and help you identify the harassment-prevention features your bot might need. 

Baldwin also delineated potential bot safeguards: if a bot detects suicidal feelings, it should provide a crisis hotline number, and if it detects frustration, it should give the person a way to contact the human behind the bot. 
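
Wired into a bot, those safeguards might look like a small screening layer that runs before any other logic. The keyword matching below is a crude stand-in for the classifiers a production bot would use:

```python
# Illustrative safeguard layer based on Baldwin's suggestions; the term
# lists and messages are placeholders, not any real bot's configuration.
CRISIS_TERMS = {"suicidal", "kill myself", "want to die"}
FRUSTRATION_TERMS = {"useless", "you don't understand", "talk to a human"}

HOTLINE_MSG = ("You're not alone. Please call a crisis line, "
               "such as 1-800-273-8255 in the U.S.")
ESCALATE_MSG = ("Sorry I'm not getting this right. "
                "Reply HUMAN and a person will take over.")

def safety_check(text: str):
    """Return an intervention message, or None to proceed normally."""
    lowered = text.lower()
    if any(term in lowered for term in CRISIS_TERMS):
        return HOTLINE_MSG
    if any(term in lowered for term in FRUSTRATION_TERMS):
        return ESCALATE_MSG
    return None

print(safety_check("this bot is useless"))  # -> escalation message
```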

#TBotT

In 2011, Apple introduced us to the potential for personal assistants that augment our mobile behavior. Siri was not the first interactive bot, though it could be considered a catalyst for companies to make AI helpers universal. Now there’s Microsoft’s Cortana, Google Now, Alexa from Amazon, and Facebook’s bot-human hybrid M, all angling for computerized control of simple behaviors like scheduling and shopping. 

Before we had personal assistants on our phones, and before we tweeted at random bots, there was SmarterChild. The AIM bot that kept us company after school and could tell us random facts about sports or the weather was, for many people (especially young people), their first exposure to bots. Ten years before Siri, SmarterChild introduced natural language comprehension to instant messaging. The bot was developed by ActiveBuddy, which eventually became Colloquis.

“Our technology was very heavily based in a pattern-matching algorithm without as much machine learning that is being deployed today in many cases,” Robert Hoffer, cofounder of Colloquis, said in an interview with the Daily Dot. 
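
That style of system is easy to picture: an ordered list of pattern-and-response rules, with no learning involved. The patterns and canned replies below are invented for illustration, not SmarterChild’s actual rules:

```python
import re

# Ordered (pattern, response) rules; first match wins. Everything here
# is made up to show the pattern-matching style, not real bot content.
RULES = [
    (re.compile(r"\bweather\b", re.I), "It's 62 degrees and sunny in New York."),
    (re.compile(r"\b(score|game)\b", re.I), "The Yankees won 4-2 last night."),
    (re.compile(r"\b(hello|hi)\b", re.I), "Hey there! Ask me about sports or the weather."),
]

def respond(message: str) -> str:
    for pattern, reply in RULES:
        if pattern.search(message):
            return reply
    return "I don't know about that yet."

print(respond("what's the weather like?"))
```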

SmarterChild was the first application to offer real-time stock market quotes through instant messaging, and it had a variety of knowledge domains, like baseball, football, and news. But, Hoffer said, the number one knowledge domain was the curse-word filter. And around 3 p.m., when school let out, user activity would significantly increase. 

“The biggest thing people tried to do was to—like Captain Kirk would out-logic the Landru computer on the strange planet and get it into a logical loop that would trick itself into blowing itself up—they would try and Captain Kirk it, and what they were trying to do, they would try and convince the bot to have sex with them,” he said.  

It’s similar to the behavior Microsoft experienced with Tay: a group of users trying to manipulate the bot into saying something it shouldn’t. 

Hoffer said it’s important for bots to have personality, and for companies to invest in the editorial work that gives a bot its voice and focus. There is also a challenge, he said, in ambiguous language: bots don’t necessarily understand nuanced conversation. A human might say “I’m doing my makeup,” which actually means “I will be ready in 20 more minutes.” 

Microsoft acquired Colloquis in 2006 and killed the SmarterChild tech. Now, almost a decade later and under different leadership, the company is finally realizing the power of bots. 

“You have to have the courage to allow people to deploy bots that have personalities and you might not like what you get,” Hoffer said.

Why bots are so hot right now

Bots are becoming popular in part because people are tired of apps. “App fatigue” is the idea that the mobile market is saturated with services that don’t meaningfully improve on the mobile Web experience, which pushes developers to find other ways to get content and software to people. 

“Users are tired of downloading apps that don’t take advantage of mobile hardware,” Baldwin said. “Snapchat is valuable because it uses the phone’s video camera, while games take advantage of the phone’s touchscreen, GPU, and sensors. When an app is created that doesn’t rely on phone hardware, there’s no real incentive to download it instead of just using the mobile web or chat version.”

Developers, too, are tired of building apps for overcrowded marketplaces, and cross-platform bots are new enough to appeal to users and ensure developers get more exposure. 

Bots will be as varied as the current mobile app ecosystem. Kip, for instance, is distributed across multiple messaging platforms: users tell Kip what they want to buy, and Kip automates the process while taking advantage of the features of Skype, Slack, or whatever other communication system it runs in, Baldwin said. 
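
The usual architecture for a cross-platform bot like that is one “brain” behind thin per-platform adapters. A hypothetical sketch, not Kip’s actual code:

```python
class ShoppingBot:
    """Platform-agnostic conversational logic."""
    def handle(self, text: str) -> str:
        return f"Searching stores for: {text}"

class SlackAdapter:
    """Translates Slack's event shape into plain text and back."""
    def __init__(self, bot: ShoppingBot):
        self.bot = bot
    def on_event(self, event: dict) -> str:
        return self.bot.handle(event["text"])

class SkypeAdapter:
    """Same bot brain, different wire format."""
    def __init__(self, bot: ShoppingBot):
        self.bot = bot
    def on_activity(self, activity: dict) -> str:
        return self.bot.handle(activity["message"])

bot = ShoppingBot()
print(SlackAdapter(bot).on_event({"text": "red sneakers"}))
```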

Other bots, like those Microsoft envisions, will book plane tickets and hotel rooms, teach children about shapes, and act more as personal assistants keeping track of our lives through our phones and computers. 

“We’re running out of humans to do things at a cheap price,” Baldwin said. “As the world population grows, and more people need assistants to augment their lives, we can’t rely on humans to be on call for them. Everyone deserves their own assistant, even human assistants and other caretakers who take on informal labor outside their job scope.”

Microsoft is doubling down on what it calls “conversation as a platform.” The machines we chat with will control our behaviors, going beyond scheduling a reminder or skipping through music. Bots will sift through tons of data, pick out what’s relevant and deliver it to us in an actionable way, essentially performing the grunt work of the Internet. 

Tay was one small experiment that highlighted the problems of releasing AI into the world while underestimating human behavior. And soon, there will be many more Tays to engage with. 

Illustration via Max Fleishman

 