Guoming Zhang (Zhejiang University), Xiaoyu Ji (Zhejiang University), Xinfeng Li (Zhejiang University), Gang Qu (University of Maryland), Wenyuan Xu (Zhejiang University)

DolphinAttacks (i.e., inaudible voice commands) modulate audible voices over ultrasound to inject malicious commands silently into voice assistants and manipulate controlled systems (e.g., doors or smart speakers). Eliminating DolphinAttacks is challenging, if at all possible, since it requires modifying the microphone hardware. In this paper, we design EarArray, a lightweight method that can not only detect such attacks but also identify the direction of attackers without requiring any extra hardware or hardware modification. Essentially, inaudible voice commands are modulated on ultrasound, which inherently attenuates faster than audible sound. By inspecting the command sound signals via the built-in multiple microphones on smart devices, EarArray is able to estimate the attenuation rate and thus detect the attacks. We propose a model of the propagation of audible sounds and ultrasounds from the sound source to a voice assistant, e.g., a smart speaker, and illustrate the underlying principle and its feasibility. We implemented EarArray using two specially-designed microphone arrays, and our experiments show that EarArray can detect inaudible voice commands with an accuracy of 99% and recognize the direction of the attackers with an accuracy of 97.89%.
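To illustrate the underlying idea of attenuation-based detection, below is a minimal sketch (not the authors' implementation): it assumes a linear microphone array with a hypothetical spacing and threshold, estimates how quickly signal energy decays across the array, and flags a command whose decay rate is much steeper than that of ordinary audible speech.

```python
# Hypothetical sketch of attenuation-based detection across a microphone array.
# The mic spacing, decay-ratio threshold, and function names are illustrative
# assumptions, not the parameters or code used by EarArray.
import numpy as np

MIC_SPACING_M = 0.05          # assumed spacing between adjacent microphones
DECAY_RATIO_THRESHOLD = 1.5   # assumed ratio above which we flag an attack


def rms(channel: np.ndarray) -> float:
    """Root-mean-square energy of one microphone channel."""
    return float(np.sqrt(np.mean(channel ** 2)))


def estimate_decay(channels: list[np.ndarray]) -> float:
    """Fit an exponential energy-decay rate (per metre) across the array."""
    energies = np.array([rms(ch) for ch in channels])
    distances = np.arange(len(channels)) * MIC_SPACING_M
    # Log-linear fit: log(E_i) ≈ -k * d_i + c, so the decay rate k = -slope.
    slope, _ = np.polyfit(distances, np.log(energies + 1e-12), 1)
    return -slope


def is_inaudible_attack(channels: list[np.ndarray], baseline_decay: float) -> bool:
    """Flag commands whose energy decays much faster than audible speech would."""
    return estimate_decay(channels) > DECAY_RATIO_THRESHOLD * baseline_decay
```

In practice the baseline decay for audible speech would have to be profiled per device, since it depends on the array geometry and enclosure; the sketch only conveys the principle that ultrasound-borne commands lose energy across the array noticeably faster than genuine voice.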
