Adversarial Attacks Against Automatic Speech Recognition Systems via Psychoacoustic Hiding

Lea Schönherr (Ruhr University Bochum), Katharina Kohls (Ruhr University Bochum), Steffen Zeiler (Ruhr University Bochum), Thorsten Holz (Ruhr University Bochum), Dorothea Kolossa (Ruhr University Bochum)

Voice interfaces are becoming widely accepted as input methods for a diverse set of devices. This development is driven by rapid improvements in automatic speech recognition (ASR), which now performs on par with human listeners in many tasks. These improvements are based on the ongoing evolution of deep neural networks (DNNs) as the computational core of ASR. However, recent research has shown that DNNs are vulnerable to adversarial perturbations, which allow attackers to force the transcription into a malicious output.

In this paper, we introduce a new type of adversarial examples based on *psychoacoustic hiding*. Our attack exploits the characteristics of DNN-based ASR systems: we extend the recognizer's original analysis procedure with an additional backpropagation step and use it to learn the degrees of freedom for an adversarial perturbation of the input signal, i.e., we apply a psychoacoustic model and manipulate the acoustic signal only below the thresholds of human perception. To further minimize the perceptibility of the perturbations, we use forced alignment to find the best-fitting temporal alignment between the original audio sample and the malicious target transcription. These extensions allow us to embed a malicious voice command in an arbitrary audio input, which is then transcribed by the ASR system while the audio signal remains barely distinguishable from the original. In an experimental evaluation, we attack the state-of-the-art speech recognition system *Kaldi* and determine the best-performing parameter and analysis setup for different types of input. Our results show that we succeed in up to 98% of cases, with a computational effort of less than two minutes for a ten-second audio file. In user studies, none of our target transcriptions were audible to human listeners, who still understood the original speech content with unchanged accuracy.
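To make the core optimization loop concrete, the following is a minimal, self-contained sketch of the general idea in PyTorch. It is an illustration under simplifying assumptions, not the Kaldi-based implementation evaluated in the paper: the hypothetical `TinyASR` model stands in for the hybrid DNN-HMM recognizer, a CTC loss replaces the recognizer's own objective and the forced-alignment step, and `masking_threshold` is only a crude per-bin proxy for a real psychoacoustic hearing-threshold model.

```python
# Minimal sketch of the attack idea, NOT the paper's Kaldi-based implementation:
# TinyASR, the CTC objective, and the margin_db masking proxy are illustrative
# stand-ins for the hybrid DNN-HMM recognizer and the psychoacoustic hearing
# thresholds used in the actual attack.
import torch
import torch.nn.functional as F

class TinyASR(torch.nn.Module):
    """Toy differentiable recognizer: log-magnitude STFT -> per-frame logits."""
    def __init__(self, n_fft=512, vocab=29):
        super().__init__()
        self.n_fft = n_fft
        self.window = torch.hann_window(n_fft)
        self.proj = torch.nn.Linear(n_fft // 2 + 1, vocab)

    def forward(self, wave):                                    # wave: (samples,)
        spec = torch.stft(wave, self.n_fft, hop_length=self.n_fft // 2,
                          window=self.window, return_complex=True).abs()
        return self.proj(torch.log1p(spec).T).log_softmax(-1)   # (frames, vocab)

def masking_threshold(wave, n_fft=512, margin_db=-20.0):
    """Crude proxy for a psychoacoustic model: allow perturbation energy only
    up to margin_db below the original signal's per-bin magnitude."""
    spec = torch.stft(wave, n_fft, hop_length=n_fft // 2,
                      window=torch.hann_window(n_fft), return_complex=True).abs()
    return spec * 10.0 ** (margin_db / 20.0)

def attack(model, wave, target, steps=500, lr=1e-3, alpha=0.05, n_fft=512):
    """Learn a delta so that model(wave + delta) decodes to `target` while the
    spectrum of delta stays (softly) below the masking threshold of `wave`."""
    delta = torch.zeros_like(wave, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    thresh = masking_threshold(wave, n_fft)
    window = torch.hann_window(n_fft)
    for _ in range(steps):
        logits = model(wave + delta)                            # (frames, vocab)
        asr_loss = F.ctc_loss(logits.unsqueeze(1),              # (T, 1, vocab)
                              target.unsqueeze(0),
                              torch.tensor([logits.shape[0]]),
                              torch.tensor([target.numel()]))
        d_spec = torch.stft(delta, n_fft, hop_length=n_fft // 2,
                            window=window, return_complex=True).abs()
        hide_loss = F.relu(d_spec - thresh).mean()              # audible energy
        opt.zero_grad()
        (asr_loss + alpha * hide_loss).backward()               # backprop to delta
        opt.step()
    return (wave + delta).detach()
```

A toy call such as `attack(TinyASR(), torch.randn(16000), torch.tensor([5, 3, 9]))` (with target labels drawn from the non-blank CTC symbols) runs the loop end to end. In the actual attack, gradients are propagated through the recognizer's own feature extraction, the perturbation is bounded by hearing thresholds from a psychoacoustic model, and forced alignment chooses where in the signal the malicious transcription is placed.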
