Tao Chen (City University of Hong Kong), Longfei Shangguan (Microsoft), Zhenjiang Li (City University of Hong Kong), Kyle Jamieson (Princeton University)

This paper presents Metamorph, a system that generates imperceptible audio that survives over-the-air transmission to attack the neural network of a speech recognition system. The key challenge is ensuring that the perturbation added to the original audio in advance, at the sender side, remains effective under the unknown signal distortions introduced during transmission. Our empirical study reveals that these distortions arise mainly from device and channel frequency selectivity, each with different characteristics. This creates an opportunity to capture and pre-code their impact when generating adversarial examples that are robust to over-the-air transmission. Metamorph leverages this opportunity: it first derives an initial perturbation that captures the core distortion impact from only a small set of prior measurements, and then applies a domain adaptation algorithm to refine the perturbation, further improving the attack distance and reliability. We also reduce the human perceptibility of the added perturbation. In our evaluation, Metamorph achieves a high attack success rate (95%) at attack distances of up to 6 m. Within a moderate distance, e.g., 3 m, it maintains a high success rate (98%) and can be further adapted to substantially improve audio quality, as confirmed by a human perceptibility study.
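
To make the core idea concrete, below is a minimal, illustrative sketch (not the authors' implementation) of crafting an additive perturbation that stays adversarial after frequency-selective channel distortion. It assumes a differentiable speech-recognition loss asr_loss(audio, target_text) toward the attacker's target transcript and a small set of measured impulse responses channel_irs; these names, and the function craft_perturbation itself, are hypothetical placeholders.

# Illustrative sketch only -- not the authors' implementation.
# Assumes: asr_loss(audio, target_text) is a differentiable loss of some
# speech-recognition model toward the attacker's target transcript, and
# channel_irs is a small set of measured device/room impulse responses
# (both are hypothetical placeholders standing in for Metamorph's components).
import torch
import torch.nn.functional as F

def craft_perturbation(original, target_text, asr_loss, channel_irs,
                       steps=1000, lr=1e-3, eps=0.05, alpha=0.1):
    """Optimize a perturbation that still fools the model after being
    filtered by each measured channel response."""
    delta = torch.zeros_like(original, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        adv = original + delta
        loss = 0.0
        for ir in channel_irs:
            # Simulate over-the-air distortion as convolution with a
            # measured impulse response (frequency-selective filtering).
            distorted = F.conv1d(adv.view(1, 1, -1),
                                 ir.flip(0).view(1, 1, -1),
                                 padding=ir.numel() - 1).view(-1)[:original.numel()]
            loss = loss + asr_loss(distorted, target_text)
        # Penalize perturbation energy as a crude perceptibility proxy.
        loss = loss + alpha * delta.pow(2).mean()

        opt.zero_grad()
        loss.backward()
        opt.step()
        # Keep the perturbation small (L-infinity clamp).
        with torch.no_grad():
            delta.clamp_(-eps, eps)

    return (original + delta).detach()

Averaging the loss over several measured channel responses is what makes the perturbation survive transmission; Metamorph additionally refines it with domain adaptation and a more principled perceptibility treatment, which this sketch omits.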

View More Papers

Dynamic Searchable Encryption with Small Client Storage

Ioannis Demertzis (University of Maryland), Javad Ghareh Chamani (Hong Kong University of Science and Technology & Sharif University of Technology), Dimitrios Papadopoulos (Hong Kong University of Science and Technology), Charalampos Papamanthou (University of Maryland)

Detecting Probe-resistant Proxies

Sergey Frolov (University of Colorado Boulder), Jack Wampler (University of Colorado Boulder), Eric Wustrow (University of Colorado Boulder)

CloudLeak: Large-Scale Deep Learning Models Stealing Through Adversarial Examples

Honggang Yu (University of Florida), Kaichen Yang (University of Florida), Teng Zhang (University of Central Florida), Yun-Yun Tsai (National Tsing Hua University), Tsung-Yi Ho (National Tsing Hua University), Yier Jin (University of Florida)

Cross-Origin State Inference (COSI) Attacks: Leaking Web Site States...

Avinash Sudhodanan (IMDEA Software Institute), Soheil Khodayari (CISPA Helmholtz Center for Information Security), Juan Caballero (IMDEA Software Institute)
