Qiben Yan (Michigan State University), Kehai Liu (Chinese Academy of Sciences), Qin Zhou (University of Nebraska-Lincoln), Hanqing Guo (Michigan State University), Ning Zhang (Washington University in St. Louis)

With recent advances in artificial intelligence and natural language processing, voice has become a primary method for human-computer interaction, enabling game-changing technologies in both the commercial sector, such as Siri, Alexa, and Google Assistant, and the military sector, such as voice-controlled naval warships. Recently, researchers have demonstrated that these voice assistant systems are susceptible to signal injection via voice commands at inaudible frequencies. To date, most existing work has focused primarily on delivering a single command via a line-of-sight ultrasonic speaker and on extending the range of this attack with speaker arrays. However, sound waves also propagate through other materials in which vibration is possible. In this work, we aim to understand the characteristics of this new genre of attack across different transmission media. Furthermore, by leveraging the unique properties of acoustic transmission in solid materials, we design a new attack called SurfingAttack that allows multiple rounds of interaction with the voice-controlled device over a longer distance and without the need for line-of-sight, resulting in minimal change to the physical environment. This greatly elevates the potential risk of inaudible sound attacks, enabling many new attack scenarios, such as hijacking a mobile Short Message Service (SMS) passcode or making ghost fraud calls without the owner's knowledge. To accomplish SurfingAttack, we solved several major challenges. First, the signal is specially designed to allow omni-directional transmission for performing effective attacks over a solid medium. Second, the attack enables two-way communication without alerting the legitimate user at the scene, which is challenging because the device is designed to interact with humans in physical proximity rather than with sensors. To mitigate this newly discovered threat, we also provide discussions and experimental results on potential countermeasures.
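Although SurfingAttack delivers its signal through a solid surface rather than over the air, the injection principle shared by this line of work is to amplitude-modulate an audible voice command onto an ultrasonic carrier, which the nonlinearity of the victim device's microphone then demodulates back into the audible band. The following Python sketch illustrates only that general modulation idea under assumed parameters; the carrier frequency, modulation depth, sample rate, and file names are illustrative choices, not values taken from the paper.

# Illustrative sketch: amplitude-modulating a (mono) voice command onto an
# ultrasonic carrier. All parameters below are assumptions for illustration.
import numpy as np
from scipy.io import wavfile

FS = 192_000          # output sample rate, high enough to represent the carrier (assumed)
CARRIER_HZ = 25_000   # ultrasonic carrier above the human hearing range (assumed)
MOD_DEPTH = 0.8       # amplitude-modulation depth (assumed)

def modulate_command(voice, fs_in):
    """Resample a recorded voice command and AM-modulate it onto the carrier."""
    # Resample the baseband voice signal to the output rate by interpolation.
    n_out = int(len(voice) * FS / fs_in)
    t_in = np.linspace(0, len(voice) / fs_in, len(voice), endpoint=False)
    t_out = np.linspace(0, len(voice) / fs_in, n_out, endpoint=False)
    baseband = np.interp(t_out, t_in, voice / np.max(np.abs(voice)))
    # Standard AM: carrier * (1 + m * baseband). Microphone nonlinearity on the
    # victim device recovers the baseband command from this inaudible signal.
    carrier = np.cos(2 * np.pi * CARRIER_HZ * t_out)
    return carrier * (1.0 + MOD_DEPTH * baseband)

if __name__ == "__main__":
    # "command.wav" is a hypothetical mono recording, e.g. a wake word plus command.
    fs_in, voice = wavfile.read("command.wav")
    ultrasonic = modulate_command(voice.astype(float), fs_in)
    wavfile.write("ultrasonic_command.wav", FS, (ultrasonic * 32767).astype(np.int16))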
