top of page

Mini Dragon Group (ages 6-7)

Public·61 members
Eli Anderson
Eli Anderson

Silent Attacks Against Voice Assistants __TOP__


A team of academic researchers has tested the phonetic wherewithal of smart-home assistants Amazon Alexa and Google Home, finding it possible to closely mimic legitimate voice commands in order to carry out nefarious actions.




Silent attacks against voice assistants



Prior research also shows that adversaries can generate obfuscated voice commands to spy on users or gather data. DolphinAttack for instance is a method for using completely inaudible ultrasound signals to attack speech-recognition systems and transmit harmful instructions to popular voice assistants like Siri, Google, Cortana, and Alexa. And in November, security firm Armis disclosed that Amazon Echo and Google Home devices are vulnerable to attacks through the over-the-air BlueBorne Bluetooth vulnerability.


As a result, the targeted gadget hears and executes the voice command, opening up all kinds of opportunities for attackers. The researchers were able to successfully reproduce the attack on the most popular voice assistants, including Amazon Alexa, Apple Siri, Google Now, Samsung S Voice, and Microsoft Cortana.


All of that takes things a step beyond what we saw last year, when researchers in China showed that inaudible, ultrasonic transmissions could successfully trigger popular voice assistants like Siri, Alexa, Cortana and the Google Assistant. That method, dubbed "DolphinAttack," required the attacker to be within whisper distance of your phone or smart speaker. New studies conducted since suggest that ultrasonic attacks like that one could be amplified and executed at a distance -- perhaps as far away as 25 feet.


"My assumption is that the malicious people already employ people to do what I do," Carlini told the Times, with the paper adding that, "he was confident that in time he and his colleagues could mount successful adversarial attacks against any smart device system on the market."


So what are the makers of these voice platforms doing to protect people? Good question. None of the companies we've talked to have denied that attacks like these are possible -- and none of them have offered up any specific solutions that would seem capable of stopping them from working. None would say, for instance, whether or not their voice platform was capable of distinguishing between different audio frequencies and then blocking ultrasonic commands above 20kHz. Some, like Apple, declined to comment for this story.


Audio Adversarial Examples:Targeted Attacks on Speech-to-Text [Nicholas Carlini andDavid Wagner/UC Berkeley]AI learns how to fool speech-to-text. That's bad news for voice assistants [Tristan Greene/The Next Web]


The researchers also came up with a "voice masquerading attack." In this case the malicious service waits until the user asks to switch to a different skill. For instance, they could ask "Alexa, capital one" but the rogue skill would still run, whilst impersonating the real Capital One tool. Again, the researchers said this worked on both Google and Amazon, as they tested the attacks on the Home Mini and the Echo Dot.


To add secret recording functionality, the academics used the "reprompt" function. That feature allows a skill to keep running when it doesn't receive a response, as long a notice is issued to users via an audio or text file. The researchers abused this by creating a long, silent audio file as a reprompt so the user wouldn't receive any noticeable warning the mic was still recording. Their rogue skill was able to record for 102 seconds on Alexa and 264 seconds on Google. The researchers said that if the user continued talking, even if they weren't speaking to the home assistants, the recording could continue "indefinitely." Those techniques were similar to those demonstrated in a previous skill-based attack shown by researchers from Israeli firm Checkmarx in April.


They're still skeptical any truly effective protections are currently in place, however. "We know that Amazon and Google couldn't defend against the attacks when we reported our studies to them and we are not sure whether they can do that now," said XiaoFeng Wang from Indiana University.


Voice assistant technology is supposed to make our lives easier, but security experts say it comes with some uniquely invasive risks. Since the beginning of the year, multiple Nest security camera users have reported instances of strangers hacking into and issuing voice commands to Alexa, falsely announcing a North Korean missile attack, and targeting one family by speaking directly to their child, turning up their home thermostat to 90 degrees, and shouting insults. These incidents are alarming, but the potential for silent compromises of voice assistants could be even more damaging.


Device coverings, however, can defend against light command injection if the covering is sufficiently dense. The boffins pointed to Apple's more heavily padded HomePod as an example. Keeping smart speakers out of view from a window can also prevent such attacks, and the need for fairly precise aiming means that mobile devices would be difficult to target even if left exposed and stationary for long periods.


As connected devices such as voice assistants, security cameras, and smart appliances grow in popularity, the homes and offices where they are installed become increasingly filled with a dense web of Wi-Fi signals.


To mitigate the threat, we developed a skill-name scanner and ran it against Amazon and Google skill markets, which leads to the discovery of a large number of Alexa skills at risk and problematic skill names already published, indicating that the attacks might already hap-pen to tens of millions of VPA users. Further we designed and implemented a context-sensitive detector to mitigate the voice masquerading threat, achieving a 95% precision.


The phishing apps follow a slightly different path by responding with an error message that claims the skill or action isn't available in that user's country. They then go silent to give the impression the app is no longer running. After about a minute, the apps use a voice that mimics the ones used by Alexa and Google home to falsely claim a device update is available and prompts the user for a password for it to be installed.


Using this sequence, the voice assistants kept on listening for much longer than usual for further commands. Anything the user says is then automatically transcribed and can be sent directly to the hacker.


"Africa's voice in the UN General Assembly matters, with about 25% of the seats. If there is a strong, united push against Russia, this matters. International norms of sovereignty and territorial integrity are integral to Africa's outlook," Gruzd told CNBC on Saturday.


Lasers can hijack voice assistants in some smartphones and smart speakers, according to a new study by researchers at the University of Michigan and the University of Electro-Communications in Tokyo. The microphones interpret the light as voice commands, leaving them vulnerable under certain circumstances to malicious attacks.


CCADV is the voice against domestic violence across Connecticut. We lead a statewide network focused on advocacy, outreach and education. Our work transforms political, economic and social responses to end domestic violence in Connecticut.


There are other caveats. The silent commands sent through the tabletop still have to sound like your voice to work, but a good machine-learning algorithm, or a skilled human impersonator, can pull that off.


Siri and Google Assistant will audibly talk back to the silent commands, which won't escape attention if there's anyone around to hear them in a quiet room. However, the silent command also told the voice assistants to turn down the volume, so the responses were hard to hear in a loud room but could still be heard by the attacker's hidden microphone.


Previous work on secretly triggering voice assistants have involved sending ultrasonic commands through the air and laser beams focused on smart speakers' microphones from hundreds of feet away. This is the first time anyone has demonstrated an attack through a solid surface, however.


A scientist without a Ph.D. or a Y chromosome or academic affiliation became the most powerful voice of resistance against ruinous public policy mitigated by the self-interest of government and industry, against the hauteur and short-sightedness threatening to destroy this precious pale blue dot which we, along with countless other animals, call home.


About

Welcome to the group! You can connect with other members, ge...

Members

  • Love
    Love
  • Waylon Foster
    Waylon Foster
  • King Zog
    King Zog
  • Van Proft
    Van Proft
  • Muzzi Crack
    Muzzi Crack
bottom of page