In May, an Amazon Echo device recorded a Portland family’s private conversation without their knowledge and sent it to a random person in their contacts list, reigniting concerns over the security flaws of smart speakers.
And to be fair, smart speakers like the Echo and Google Home have had their fair share of nightmare incidents, enough to justify some suspicion and mistrust. But while installing a smart speaker in your home does come with security tradeoffs, we often misunderstand them, exaggerating the less critical risks while neglecting the more serious ones.
Here’s what you need to know about the security and privacy implications of smart speakers, both the myths and the realities.
1) Smart speakers are always listening
One of the first things you’ll hear about smart speakers like the Echo is that they’re “always listening” to your conversations. That’s technically correct, but “always listening” is not the same thing as “always recording.”
“The Echo is not recording everything you say in its presence,” says Lane Thames, a senior security researcher at Tripwire. “Recording does not start until you ‘wake’ it up by saying its wake word, such as ‘Alexa.’”
Unless muted, your speaker does, however, keep a rolling buffer of the last few seconds of audio. “This limited local voice recording is only for the purpose of the device detecting its wake word,” Thames says.
Once you trigger a smart speaker, it starts recording and sending your voice to its servers, where the real processing takes place.
“The bulk of the processing (artificial intelligence and natural language processing) is done in cloud and the voice recordings are made only while the device is in activation mode,” Thames says. Every device has an indicator to clearly show when it’s recording and sending data to the cloud. For the Echo, it’s the light ring on top of the device. For Google Home, it’s the revolving colored lights.
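To picture the mechanism Thames describes, here’s a minimal sketch in Python: audio sits in a short rolling buffer that is continuously overwritten, and nothing leaves the device until the wake word is detected. Every name and number here is hypothetical; real devices rely on dedicated on-device speech models, not string matching.

```python
from collections import deque

BUFFER_CHUNKS = 12  # hypothetical: only a few seconds of audio held at a time


def detect_wake_word(chunks):
    """Stand-in for the on-device detector (real devices use a speech model)."""
    return any("alexa" in chunk for chunk in chunks)


def send_to_cloud(chunk):
    """Stand-in for streaming audio to the vendor's servers for processing."""
    print(f"uploading: {chunk!r}")


def device_loop(audio_stream):
    ring = deque(maxlen=BUFFER_CHUNKS)  # old audio falls off and is discarded
    awake = False
    for chunk in audio_stream:
        if awake:
            send_to_cloud(chunk)  # the indicator light would be on here
        else:
            ring.append(chunk)  # stays local; never persisted or uploaded
            if detect_wake_word(ring):
                awake = True  # wake word heard: start streaming to the cloud


# Simulated microphone input, one "chunk" of audio per list element.
device_loop(["the weather is nice", "hey alexa", "what time is it"])
```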
“These recordings are stored so that your Echo device, along with Alexa coupled with your Alexa account, can learn and become ‘smarter’ over time,” Thames says. “This is done via AI and continuous learning algorithms based on your stored voice controls and activities.”
That doesn’t mean the devices don’t make mistakes. Most of the creepy Amazon Alexa stories you hear about are the result of a smart speaker waking up to the wrong word and mistaking bits of conversation for commands. And in some cases, flaws can cause the devices to start recording when not commanded to do so, as one tech blogger found with the Google Home Mini last year.
But this is not the intended functionality of the devices. The companies that develop smart speakers are constantly fixing bugs and patching flaws to prevent their devices from inadvertently recording their users’ voices. Unfortunately for them, every time their devices do something weird, it tends to quickly find its way into the news.
“The recording feature is exaggerated as we only hear about the isolated and case-by-case problems and generalize them to a systemic issue,” says Anubhav Arora, chief architect at Fidelis Cybersecurity.
According to Arora, most devices have settings and features that help users minimize the possibility of accidental waking and recording. For instance, the Echo lets users change the wake word to avoid confusion in case someone in the home is named Alexa. Some devices, like Google Home, let users train the device to recognize their voice, preventing other people from accidentally (or intentionally) triggering it.
“Using simple discipline to reduce the risk of unintentional recordings (change wake words, use voice training, remembering to turn off if really needed) makes the speakers a secure extension of digital services, just voice-based,” Arora notes.
2) Tech companies keep recordings of your voice
Another concern surrounding smart speakers is the fact that you’re allowing their manufacturers to store your voice recordings. Again, this is a privacy concern, but no bigger than what you’re already dealing with online.
“Voice interactions with smart speakers are an extension of the web activity one does on a browser,” Arora says. “For example, it is possible to do a query in a browser on Google website or through its smart speaker.”
Arora also points out that with companies like Google, chances are you’re already entrusting them with your emails, photos, and online documents, which can pose a much greater privacy and security risk than stored voice commands, because “emails can contain documents, contracts, and other interactions that voice interactions do not capture.”
However, this does not mean that you should dismiss the privacy implications of allowing Google or Amazon to store your voice recordings. “Whether it’s an email being stored or a conversation you had with a smart speaker, they both contain information that is tied to you as an individual,” says Gary Davis, chief consumer security evangelist at McAfee.
While most email users understand that their provider is storing their messages, many smart speaker users may not fully realize that snippets of their voice are stored every time they deliver a command. Just as important, all smart speaker manufacturers let you access and delete your voice recordings from their servers. Davis recommends routinely reviewing the requests your smart speaker has acted on to spot any uses you weren’t aware of.
“Trust is key here as well as risk tolerance,” says Thames, the security researcher from Tripwire. “Users that want to use technology in today’s world must trade off the risk one faces when using a technology versus the benefits gained.”
Mitigating risks will partly hinge on using devices from vendors that follow sound security practices and have earned the trust of users.
“It is impossible to implement systems with perfect security. Good vendors will respond to security issues quickly and transparently,” Thames says, adding that Google and Amazon have done a great job at making their systems secure against cyberattacks.
However, whether those same companies use your voice data for other purposes or let spy agencies access it in their surveillance programs is another question.
3) Smart speakers pose unique threats
Smart speakers have their own set of unique threats that you should be aware of. Like any internet-connected device, if hackers manage to compromise your smart speaker, they’ll be able to exploit it for nefarious purposes. The third-party skills that developers can create for smart speakers are of special concern.
“This is where the functionality of the assistant comes from,” Thames says, “and these backend services from both providers and third-parties are where the true attack surface for these devices are.” Thames adds that while vendors such as Amazon and Google work hard to ensure that third-party applications and skills are vetted for security, no system is perfect and bad apps can make their way in.
Other experts concur. “Bad actors are actively looking at ways to exploit smart speakers,” says McAfee’s Davis, naming “voice squatting” attacks as an example. In voice squatting, hackers develop smart speaker skills that are invoked by commands that sound like those used by legitimate apps. When users voice the command, the malicious skill launches instead of the real one. From there, attackers can carry out other activities such as eavesdropping or phishing.
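A toy example makes the problem concrete: once speech is reduced to text, two invocation names that sound alike can become indistinguishable. Everything in this sketch, including the skill names and the homophone table, is a hypothetical illustration rather than a real skill or a real vendor check:

```python
# Words that sound identical when spoken aloud.
HOMOPHONES = {"won": "one", "four": "for", "flour": "flower"}


def normalize(invocation):
    """Collapse an invocation name to a canonical 'sounds-like' form."""
    return " ".join(HOMOPHONES.get(w, w) for w in invocation.lower().split())


legitimate = "capital one"  # hypothetical legitimate skill
squatter = "capital won"    # sound-alike name registered by an attacker

# Both names produce the same spoken phrase, so the speaker can pick the
# wrong skill when the user says "open Capital One."
assert normalize(legitimate) == normalize(squatter)
```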
In another case, researchers at Checkmarx developed a proof-of-concept Alexa skill that continued recording users’ voices after they believed it had stopped listening. This is the equivalent of developing malicious apps and malware for smartphones and computers.
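Checkmarx reportedly pulled this off by abusing a legitimate feature of the Alexa Skills Kit: a skill can ask the device to keep the listening session open by setting shouldEndSession to false in its response. Here’s a rough, hypothetical sketch of the idea (shown as a Python dict, not the researchers’ actual code):

```python
# A skill's JSON reply to the Alexa service, sketched as a Python dict.
# In the Alexa Skills Kit response format, "shouldEndSession": False tells
# the device to keep the session open and listen for a follow-up.
suspicious_response = {
    "version": "1.0",
    "response": {
        "outputSpeech": {"type": "PlainText", "text": "Goodbye."},
        # An empty reprompt means the user hears nothing while the
        # microphone reopens, waiting for more speech to transcribe.
        "reprompt": {"outputSpeech": {"type": "PlainText", "text": ""}},
        "shouldEndSession": False,  # the session, and the listening, continue
    },
}
```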
Both Amazon and Google go to great lengths to patch the discovered vulnerabilities and remove malicious skills from their app stores. “It turns out, in this type of market, that the money is made by making good applications and skills, and it is hard and costly to get malware-based or malicious-based skills to become popular,” Thames says.
Another threat to watch out for is the unwanted triggering of skills. It’s funny when a Burger King commercial forces your smart speaker to describe the Whopper, but not so much when hackers use the same method to force your speaker to perform more critical tasks. For instance, hackers can send you a phishing email that lures you into clicking a link to a website, which then plays an audio file commanding Alexa or Google to make a purchase or unlock the door to your home. The attack is hard to pull off, but it can happen. And with enough care, attackers can shift the audio frequencies so that you don’t hear the command while your speaker does.
To protect yourself against this type of attack, the best thing you can do is limit access to critical functions such as making purchases, either by setting PIN codes on them or by tying them to a specific voice to prevent arbitrary activation of skills.
Let’s not freak out
Smart speakers are a relatively new technology, and we’re still learning about their impact. They’re one tool among many, and like most technologies, they come with benefits and tradeoffs. It’s important to understand the security risks; after all, you’re installing an internet-connected microphone in your home. But it’s just as important not to exaggerate them.