

DolphinAttack: Inaudible Voice Commands

Guoming Zhang, Chen Yan, Xiaoyu Ji, Tianchen Zhang, Taimin Zhang, Wenyuan Xu
2017 Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security - CCS '17  
In this work, we design a completely inaudible attack, DolphinAttack, that modulates voice commands on ultrasonic carriers (e.g., f > 20 kHz) to achieve inaudibility.  ...  voice command attacks.  ...  The basic principle of DolphinAttack is to inject inaudible voice commands before digitization components.  ... 
doi:10.1145/3133956.3134052 dblp:conf/ccs/ZhangYJZZX17 fatcat:7jko62emp5cwpocr65reynptjy
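The snippet above describes the core of the attack: a voice command is amplitude-modulated onto an ultrasonic carrier (f > 20 kHz) so that all transmitted energy sits above the audible band. A minimal sketch of that modulation step — the 25 kHz carrier, 192 kHz sample rate, and pure-tone "command" are illustrative assumptions, not parameters from the paper:

```python
import numpy as np

FS = 192_000         # sample rate high enough to represent the carrier (assumed)
CARRIER_HZ = 25_000  # hypothetical ultrasonic carrier, > 20 kHz so it is inaudible

def am_modulate(baseband: np.ndarray, fs: int = FS, fc: float = CARRIER_HZ) -> np.ndarray:
    """Amplitude-modulate a voice-band signal onto an ultrasonic carrier.

    Double-sideband AM with carrier: s(t) = (1 + m(t)) * cos(2*pi*fc*t),
    where m(t) is the baseband (voice) signal normalized to [-1, 1].
    """
    t = np.arange(len(baseband)) / fs
    m = baseband / (np.max(np.abs(baseband)) + 1e-12)
    return (1.0 + m) * np.cos(2 * np.pi * fc * t)

# Toy "voice command": a 1 kHz tone standing in for real speech.
t = np.arange(int(0.01 * FS)) / FS
voice = np.sin(2 * np.pi * 1000 * t)
ultrasound = am_modulate(voice)
```

Because the carrier and both sidebands (carrier ± voice bandwidth) stay above 20 kHz, a human listener hears nothing; demodulation back to baseband happens only inside a non-linear microphone.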

EarArray: Defending against DolphinAttack via Acoustic Attenuation

Guoming Zhang, Xiaoyu Ji, Xinfeng Li, Gang Qu, Wenyuan Xu
2021 Proceedings of the 2021 Network and Distributed System Security Symposium (NDSS)
DolphinAttacks (i.e., inaudible voice commands) modulate audible voices over ultrasounds to inject malicious commands silently into voice assistants and manipulate controlled systems (e.g., doors or smart  ...  Essentially, inaudible voice commands are modulated on ultrasounds that inherently attenuate faster than audible sounds.  ...  DolphinAttacks modulate malicious voice commands onto ultrasounds and thus create inaudible voice commands.  ... 
doi:10.14722/ndss.2021.24551 fatcat:ozckqbv7mzebvfyse32qliyvdq

GhostTalk: Interactive Attack on Smartphone Voice System Through Power Line [article]

Yuanda Wang, Hanqing Guo, Qiben Yan
2022 arXiv   pre-print
Inaudible voice command injection is one of the most threatening attacks towards voice assistants.  ...  In this paper, we explore a new type of channel, the power line side-channel, to launch the inaudible voice command injection.  ...  DolphinAttack [37] and SurfingAttack [35] both achieve inaudible voice command injection by leveraging the non-linearity of smartphone microphones.  ... 
arXiv:2202.02585v1 fatcat:fte635f3qzgvlk5xuzuapddqw4

Personal Voice Assistant Security and Privacy--A Survey

Peng Cheng, Utz Roedig
2022 Proceedings of the IEEE  
Personal voice assistants (PVAs) are increasingly used as interfaces to digital environments. Voice commands are used to interact with phones, smart homes, or cars.  ...  This survey describes research areas where the threat is relatively well understood but where countermeasures are lacking, for example, in the area of hidden voice commands.  ...  Roy'18 [73]: It builds on DolphinAttack'17 and aims at injecting commands into PVAs. DolphinAttack'17 has a limited attack range of 175 cm (roughly 5 ft).  ... 
doi:10.1109/jproc.2022.3153167 fatcat:hxnntjl3lnhabc37bn4y77wwhi

Risks of trusting the physics of sensors

Kevin Fu, Wenyuan Xu
2018 Communications of the ACM  
The DolphinAttack [15] represents a transduction attack vulnerability whereby inaudible sounds can trick speech recognition systems into executing phantom commands.  ...  The DolphinAttack can silently manipulate almost all popular speech recognition systems, such as Siri, Google Now, Samsung S Voice, Huawei HiVoice, Cortana, Alexa, and the voice-controlled navigation system  ... 
doi:10.1145/3176402 fatcat:bd5pj63mtzhwnfd7kdzucp5rua

Did you hear that? Adversarial Examples Against Automatic Speech Recognition [article]

Moustafa Alzantot, Bharathan Balaji, Mani Srivastava
2018 arXiv   pre-print
DolphinAttack [13] exploits the same non-linearities in microphones to create commands audible to speech assistants but inaudible to humans.  ...  These systems rely on running a speech classification model to recognize the user's voice commands.  ... 
arXiv:1801.00554v1 fatcat:cuishe57qfdpdgczwdibqjeqhm

MultiPAD: a Multivariant Partition Based Method for Audio Adversarial Examples Detection

Qingli Guo, Jing Ye, Yu Hu, Guohe Zhang, Xiaowei Li, Huawei Li
2020 IEEE Access  
The performance is evaluated on the Mozilla Common Voice dataset and the LibriSpeech dataset.  ...  Experimental results based on Mozilla Common Voice dataset show that the detection accuracy and AUC value of the model achieve 94.8% and 0.97 respectively, which are 13.5% and 0.08 higher than using the  ...  Leveraging the amplitude modulation technique, DolphinAttack [34] makes the voice commands completely inaudible by modulating them on ultrasonic carriers.  ... 
doi:10.1109/access.2020.2985231 fatcat:ojoraudmpjb4jhhu2jai6ttham

On Sensor Security in the Era of IoT and CPS

Max Panoff, Raj Gautam Dutta, Yaodan Hu, Kaichen Yang, Yier Jin
2021 SN Computer Science  
By modulating a voice command with an ultrasonic carrier, the authors are able to both activate (command: "Hey Siri") and recognize commands ("turn on airplane mode") over 80% of the time even with 75  ...  DolphinAttack by Zhang et al. uses high-frequency sounds, inaudible to humans, but audible to commonly used microphones due to nonlinearities [69] in the  ... 
doi:10.1007/s42979-020-00423-5 fatcat:n2cecnjrajbphhx6qnuiqsqhwi

CommanderSong: A Systematic Approach for Practical Adversarial Voice Recognition [article]

Xuejing Yuan, Yuxuan Chen, Yue Zhao, Yunhui Long, Xiaokang Liu, Kai Chen, Shengzhi Zhang, Heqing Huang, Xiaofeng Wang, Carl A. Gunter
2018 arXiv   pre-print
Specifically, we find that the voice commands can be stealthily embedded into songs, which, when played, can effectively control the target system through ASR without being noticed.  ...  The impacts of such threats, however, are less clear, since they are either less stealthy (producing noise-like voice commands) or requiring the physical presence of an attack device (using ultrasound)  ...  The recent work DolphinAttack [53] proposed a completely inaudible voice attack by modulating commands on ultrasound carriers and leveraging microphone vulnerabilities.  ... 
arXiv:1801.08535v3 fatcat:6ntzo26bejaldpbxmcgim4y4dq

Adversarial Machine Learning And Speech Emotion Recognition: Utilizing Generative Adversarial Networks For Robustness [article]

Siddique Latif, Rajib Rana, Junaid Qadir
2018 arXiv   pre-print
inaudible voice commands.  ...  DolphinAttack exploits inaudible ultrasounds as adversarial noise to control the victim device inconspicuously, keeping the attack sound outside human perception. Similarly, Alzantot et al.  ... 
arXiv:1811.11402v2 fatcat:ykjjg43e2rb7lkbxidv72o7uqq

Alexa versus Alexa: Controlling Smart Speakers by Self-Issuing Voice Commands [article]

Sergio Esposito, Daniele Sgandurra, Giampaolo Bella
2022 arXiv   pre-print
AvA leverages the fact that Alexa running on an Echo device correctly interprets voice commands originating from audio files even when they are played by the device itself -- i.e., it leverages a command  ...  We present Alexa versus Alexa (AvA), a novel attack that leverages audio files containing voice commands and audio reproduction methods in an offensive fashion, to gain control of Amazon Echo devices for  ...  To this end, they leverage inaudible voice commands, as in DolphinAttack [48] . An adversarial attack not leveraging audio files is discussed in the work by Sugawara et al.  ... 
arXiv:2202.08619v1 fatcat:twggcn4zhjb6jda5wd4cajkqwa

SirenAttack: Generating Adversarial Audio for End-to-End Acoustic Systems [article]

Tianyu Du, Shouling Ji, Jinfeng Li, Qinchen Gu, Ting Wang, Raheem Beyah
2019 arXiv   pre-print
We empirically evaluate SirenAttack on a set of state-of-the-art deep learning-based acoustic systems (including speech command recognition, speaker recognition and sound event classification), with results  ...  rate on the IEMOCAP dataset against the ResNet18 model, while the generated adversarial audios are also misinterpreted by multiple popular ASR platforms, including Google Cloud Speech, Microsoft Bing Voice  ...  In [6] , Zhang et al. proposed DolphinAttack, which exploits the non-linearity of the microphones to create commands inaudible to humans while audible to speech assistants.  ... 
arXiv:1901.07846v2 fatcat:4kaqq2ijuvalrequqd6tlt6q4a

Light Commands: Laser-Based Audio Injection Attacks on Voice-Controllable Systems [article]

Takeshi Sugawara, Benjamin Cyr, Sara Rampazzi, Daniel Genkin, Kevin Fu
2020 arXiv   pre-print
We then proceed to show how this effect leads to a remote voice-command injection attack on voice-controllable systems.  ...  Next, we show that user authentication on these devices is often lacking, allowing the attacker to use light-injected voice commands to unlock the target's smartlock-protected front doors, open garage  ...  Inaudible Voice Commands. A more recent line of work focuses on completely hiding the voice commands from human listeners. Roy et al.  ... 
arXiv:2006.11946v1 fatcat:hmkbuffquze4basgow76d3eb6e

'Sonic Attacks' on U.S. Diplomats in Cuba

Jolynn Tumolo
2019 The Hearing Journal®  
When a second inaudible ultrasonic source interfered with the primary inaudible ultrasonic source, intermodulation distortion created audible byproducts that share spectral characteristics with audio from  ...  We created a proof of concept eavesdropping device to exfiltrate information by amplitude modulation over an inaudible ultrasonic carrier.  ...  The DolphinAttack paper [28] uses ultrasound and intermodulation distortion to inject inaudible, fake voice commands into speech recognition systems including Siri, Google Now, Samsung S Voice, Huawei  ... 
doi:10.1097/01.hj.0000557739.97657.c8 fatcat:hybsblrm75ae7mwnsgluhr2tcq
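The intermodulation-distortion mechanism these snippets describe can be illustrated numerically: squaring an AM ultrasonic signal (a toy stand-in for the quadratic term of a microphone's non-linear response) recreates the baseband voice in the audible band. All constants below — carrier frequency, sample rate, non-linearity coefficients — are illustrative assumptions, not measured device parameters:

```python
import numpy as np

FS = 192_000  # sample rate (assumed)
FC = 25_000   # ultrasonic carrier frequency (assumed, > 20 kHz)

t = np.arange(int(0.02 * FS)) / FS
voice = np.sin(2 * np.pi * 500 * t)               # stand-in for a spoken command
am = (1.0 + 0.5 * voice) * np.cos(2 * np.pi * FC * t)

# Simplified non-linear microphone model: out = a1*x + a2*x^2.
# The x^2 term multiplies the AM signal with itself; among its products
# is a baseband copy of the original voice signal.
a1, a2 = 1.0, 0.2
mic_out = a1 * am + a2 * am ** 2

# Low-pass filter via FFT masking, keeping only the audible band (< 20 kHz),
# mimicking the microphone pipeline's band-limiting stage.
spec = np.fft.rfft(mic_out)
freqs = np.fft.rfftfreq(len(mic_out), 1 / FS)
spec[freqs >= 20_000] = 0
recovered = np.fft.irfft(spec, n=len(mic_out))
```

The linear term contributes nothing below 20 kHz (carrier and sidebands are all ultrasonic), so everything audible in `recovered` comes from the quadratic term — which is why the defenses surveyed above target either the ultrasonic band itself or the microphone's non-linearity.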

Hands-Free Authentication for Virtual Assistants with Trusted IoT Device and Machine Learning

Victor Takashi Hayashi, Wilson Vicente Ruggiero
2022 Sensors  
Virtual assistants, deployed on smartphone and smart speaker devices, enable hands-free financial transactions by voice commands.  ...  Even though these voice transactions are frictionless for end users, they are susceptible to typical attacks to authentication protocols (e.g., replay).  ...  Inaudible voice commands were recognized by commercial speech recognition systems, such as Siri, Google Now, and Alexa.  ... 
doi:10.3390/s22041325 pmid:35214227 pmcid:PMC8874467 fatcat:6xzkeybjmfat3lcdednt3lkj3q