Researchers can now send secret audio instructions, undetectable to the human ear, to Apple's Siri, Amazon's Alexa, and Google's Assistant, according to The New York Times.
Over the last two years, researchers have figured out how to make these devices dial phone numbers and open websites, prompting worries that malicious users may soon be able to unlock doors to homes, take money out of bank accounts, or simply buy products online. For watchers of Josie and the Pussycats, it could spark concern about subliminal messaging as well.
In 2016, research groups at the University of California, Berkeley, and Georgetown University demonstrated that they could hide commands in white noise played over loudspeakers and in YouTube videos, tricking smart devices into turning on airplane mode or opening a website. Now, the newspaper reports, Berkeley researchers have published a paper showing they can embed commands directly into recordings of music, so that while you listen to your favorite new single, Alexa hears an instruction to purchase something from Amazon.
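At a high level, the Berkeley work treats this as an optimization problem: nudge the waveform just enough that a speech-recognition model transcribes an attacker-chosen phrase, while keeping the perturbation quiet enough that listeners don't notice. The sketch below illustrates that loop against a tiny stand-in network; the toy model, the placeholder "music" clip, the target phrase, and all constants are illustrative assumptions, not the actual pipeline from the paper.

```python
# Minimal sketch of the idea behind audio adversarial examples: optimize a
# small perturbation "delta" so a speech-to-text model hears a chosen phrase.
# The model here is a tiny random stand-in, NOT the network used in the paper.
import torch
import torch.nn as nn

torch.manual_seed(0)

SAMPLE_RATE = 16000
ALPHABET = " abcdefghijklmnopqrstuvwxyz"  # CTC blank is reserved at index 0

class ToySpeechModel(nn.Module):
    """Stand-in for a real speech-recognition network (assumption)."""
    def __init__(self, frame: int = 320):
        super().__init__()
        self.frame = frame
        self.proj = nn.Linear(frame, len(ALPHABET) + 1)  # +1 for the blank
    def forward(self, wav: torch.Tensor) -> torch.Tensor:
        # Chop the waveform into frames and emit per-frame log-probs (T, N, C).
        frames = wav.unfold(0, self.frame, self.frame)
        return self.proj(frames).log_softmax(-1).unsqueeze(1)

model = ToySpeechModel()
music = torch.randn(SAMPLE_RATE)       # placeholder for a real song clip
target = "open evil dot com"           # hypothetical attacker-chosen phrase
target_ids = torch.tensor([[ALPHABET.index(c) + 1 for c in target]])

delta = torch.zeros_like(music, requires_grad=True)
opt = torch.optim.Adam([delta], lr=1e-2)
ctc = nn.CTCLoss(blank=0)

for step in range(300):
    log_probs = model(music + delta)
    T = log_probs.shape[0]
    loss = ctc(log_probs, target_ids,
               torch.tensor([T]), torch.tensor([target_ids.shape[1]]))
    # Penalize loud perturbations so the change stays hard to hear.
    loss = loss + 0.1 * delta.abs().max()
    opt.zero_grad()
    loss.backward()
    opt.step()

print("perturbation peak amplitude:", delta.abs().max().item())
```

Against a real, fully trained speech model the same loop applies; the hard part the researchers solved is making the perturbation survive playback over the air while staying imperceptible.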
“We wanted to see if we could make it even more stealthy,” Nicholas Carlini, a fifth-year Ph.D. student in computer security at U.C. Berkeley and one of the paper’s authors, told the Times.
Meanwhile, researchers at Princeton University and China's Zhejiang University demonstrated in 2017 that voice-recognition systems could be activated using frequencies inaudible to the human ear, a technique they called the "DolphinAttack." The attack modulates a voice command onto an ultrasonic carrier; the phone's microphone hardware demodulates it, so the assistant hears speech while the owner hears nothing. In demonstrations, the attack first muted the phone so the owner couldn't hear what was going on, then instructed the device to visit malicious websites, initiate phone calls, take a picture, or send text messages. This year, another group of researchers showed that such voice commands could also be embedded in songs.
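The signal-processing trick at the heart of DolphinAttack is ordinary amplitude modulation. The sketch below shows the general idea; the carrier frequency, sample rate, modulation depth, and the stand-in "command" tone are all illustrative assumptions, not the researchers' exact parameters.

```python
# Sketch of the DolphinAttack idea: amplitude-modulate a voice command onto an
# ultrasonic carrier above ~20 kHz. Played through a capable speaker, the
# result is inaudible to humans, but a microphone's nonlinear response can
# recover the baseband command, which the assistant then transcribes.
import numpy as np

FS = 192_000          # high sample rate needed to represent ultrasound
CARRIER_HZ = 25_000   # above the ~20 kHz ceiling of human hearing

def ultrasonic_modulate(command: np.ndarray, fs: int = FS,
                        carrier_hz: float = CARRIER_HZ) -> np.ndarray:
    """Standard AM: carrier scaled by (1 + m * command)."""
    t = np.arange(len(command)) / fs
    carrier = np.sin(2 * np.pi * carrier_hz * t)
    m = 0.8  # modulation depth (illustrative)
    return (1 + m * command) * carrier

# Placeholder "command": a 1 kHz tone standing in for recorded speech.
t = np.arange(int(0.5 * FS)) / FS
command = 0.5 * np.sin(2 * np.pi * 1000 * t)
attack_signal = ultrasonic_modulate(command)
```

The defense side follows from the same math: hardware or firmware that filters out energy above the voice band before recognition would blunt this class of attack.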
Right now, no laws regulate subliminal messaging aimed at artificial intelligence, or at people, for that matter, which could become a problem as these technologies grow more capable: voice-controlled digital assistants are projected to outnumber people by 2021, according to the research firm Ovum, and more than half of all American households will have at least one smart speaker by then, according to Juniper Research.
Some readers were alarmed by these advances, and many tweeted their concerns, including that the research felt chillingly reminiscent of the dystopian novel 1984.
Good morning from the dystopia https://t.co/YRvJOhHn8m
— erin mccann | (@mccanner) May 10, 2018
hackers can now send literal dogwhistles via radio signal to siri or alexa to open incriminating websites or wire money out of your account+all I can think about beside the fact we live in hell is the extent to which the stasi would’ve played god with this https://t.co/FDZmAkMuTU
— csz (@cszabla) May 11, 2018
All three companies, Amazon, Google, and Apple, assured the Times that their devices are secure against such intrusions.
Google's and Amazon's assistants use voice recognition to prevent devices from acting on certain commands unless they recognize the user's voice, though researchers have shown that such recognition can be fooled. Apple said its smart speaker, the HomePod, cannot unlock doors, and that iPhones and iPads must be unlocked before Siri will act on commands that access sensitive data or open apps and websites, among other measures.
When choosing a smart speaker for the home, users will now have to weigh which device is least likely to be hijacked by outsiders, on top of researching which one best handles the tasks they need.