Many have worried that Amazon’s Alexa is spying on them through their Echo device. Others, like this Oregon family, merely joked about it — until one day, the voice-activated assistant gave them evidence.
The family said the Echo recorded a conversation they were having, then sent the recording to a random person in their contact list, all without ever receiving an explicit command or asking for verbal confirmation. It was only when the contact, an employee of the husband’s, called them about the conversation that the family realized anything was amiss.
Privacy is a sensitive topic these days, with so much personal data being leaked and hacked. Even if people aren’t reciting their Social Security and credit card numbers to Alexa, it’s understandable that critics might feel wary about the always-on nature of smart home speakers.
However unlikely it is that anyone is actually listening in (say, the government using such devices to spy on citizens, as some have theorized), that doesn’t lessen the invasiveness of what these speakers can do, and indeed what they were built to do.
When Smart Isn’t Smart
While it may not be “spying,” exactly, devices such as the Echo (as well as its Google and Apple counterparts, the Google Home and Apple HomePod) are always listening for their wake word, which means they’re always listening.
And, as the Oregon family’s viral story demonstrates, they aren’t always very good at it. Devices routinely mistake ambient noise, such as sound from the TV, song lyrics, or a snippet of conversation, for wake words or verbal commands.
One tech columnist from The Washington Post, which reported on the Oregon family’s incident, wrote that at least one of his smart speakers randomly starts recording on a weekly basis. He owns an Amazon Echo, a Google Home and an Apple HomePod, and all three make this mistake.
The writer explains that the Echo uses seven microphones and noise cancellation to listen for its wake word, holding one second of ambient sound at a time, analyzing it for the wake word, then discarding it and analyzing the next second. It only begins recording in earnest once it thinks it has heard the wake word, but clearly, it isn’t always right about that.
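To make that rolling-window behavior concrete, here is a minimal sketch of such an always-listening loop. Everything in it is an assumption for illustration: the sample rate, the chunk size, and the stand-in functions are not Amazon’s actual implementation, and the “classifier” fires at random, loosely mirroring the false positives described above.

```python
import collections
import random

SAMPLE_RATE = 16_000   # samples per second (a common rate for speech; assumed)
CHUNK = 1_600          # 0.1 seconds of audio per microphone read (assumed)

def read_audio_chunk():
    """Stand-in for a microphone read; returns fake samples."""
    return [random.uniform(-1.0, 1.0) for _ in range(CHUNK)]

def looks_like_wake_word(window):
    """Stand-in for the on-device wake-word classifier. A real
    detector is a trained model; this one fires at random."""
    return random.random() < 0.001

def listen():
    # Hold only the most recent second of audio; older samples are
    # discarded automatically as new ones arrive.
    window = collections.deque(maxlen=SAMPLE_RATE)
    while True:
        window.extend(read_audio_chunk())
        if looks_like_wake_word(window):
            print("Thinks it heard the wake word -- now recording the command")
            break

if __name__ == "__main__":
    listen()
```

The one-second window means nothing is kept long-term while the device idles; the trouble starts only when the classifier guesses wrong and real recording begins.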
Google, too, had a snafu with its Home Mini last year: some units were found to be recording nearly everything they heard, and Google had to patch them.
Beware the Dolphins
Voice-activated devices in general suffer from a major vulnerability: They can hear things humans can’t. That means that, theoretically, a malicious actor could use ultrasonic commands to activate devices and direct them to visit websites loaded with malware, initiate calls or text messages, or take photos.
It’s called a dolphin attack, and the equipment needed to carry one out costs only about $3. Researchers first demonstrated the technique in the fall of 2017, and the vulnerability has once again come to the forefront thanks to a recent report by The New York Times.
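The gist of why this works: the attacker amplitude-modulates an ordinary voice command onto a carrier above the roughly 20 kHz ceiling of human hearing. People hear nothing, but microphone hardware is slightly nonlinear and effectively demodulates the audible envelope back out. The sketch below only generates such a signal in software; the sample rate, carrier frequency and stand-in command are illustrative assumptions, not the researchers’ exact parameters.

```python
import numpy as np

FS = 192_000          # sample rate high enough to represent ultrasound (assumed)
CARRIER_HZ = 25_000   # above the human hearing ceiling, so inaudible

def ultrasonic_payload(command, fs=FS, carrier_hz=CARRIER_HZ):
    """Classic AM: (1 + m(t)) * carrier, with m(t) scaled to [-1, 1]."""
    t = np.arange(len(command)) / fs
    carrier = np.sin(2 * np.pi * carrier_hz * t)
    m = command / (np.max(np.abs(command)) + 1e-12)
    return (1.0 + m) * carrier

# Stand-in for a recorded voice command (here, just a 300 Hz tone).
t = np.arange(int(0.5 * FS)) / FS
fake_command = np.sin(2 * np.pi * 300 * t)
signal = ultrasonic_payload(fake_command)   # all energy sits near 25 kHz
```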
The report notes that criminals or mischief-makers could also trick devices by embedding commands in songs that can be broadcast or played on streaming platforms. Humans can’t discern the commands, but their devices sure can.
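The mechanism there is adversarial audio: a perturbation quiet enough that listeners dismiss it as noise, but crafted so a speech recognizer transcribes a command. The toy below only sketches the shape of that objective against a made-up stub recognizer; real attacks optimize against actual speech-to-text models.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8_000
TRIGGER = np.sin(np.linspace(0, 40 * np.pi, N))   # arbitrary direction the stub listens for

def stub_recognizer(audio):
    """Stand-in for speech-to-text: 'hears' a command whenever the
    trigger direction in the signal is strong enough."""
    return "hidden command" if audio @ TRIGGER > 50 else "just music"

song = rng.normal(0, 0.1, N)    # stand-in for a music clip
perturbation = np.zeros(N)

# Crude search: grow the perturbation along the trigger direction
# until the recognizer flips, keeping it quiet relative to the song.
while stub_recognizer(song + perturbation) == "just music":
    perturbation += 0.001 * TRIGGER

print("perturbation is",
      np.sqrt(np.mean(song**2)) / np.sqrt(np.mean(perturbation**2)),
      "times quieter than the song")
```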
Even if no one is hacking users, and even if no one is actively spying on them, bugs like these raise red flags for many users, and rightly so. Alexa doesn’t have to report back to criminals or the government to do damage; all she has to do is send a conversation to the wrong contact, and friendships or reputations could be destroyed.
No new technology is perfect, but this one may have needed a little more time to incubate before making its way into the mainstream.