Yangyong Zhang (Texas A&M University), Lei Xu (Texas A&M University), Abner Mendoza (Texas A&M University), Guangliang Yang (Texas A&M University), Phakpoom Chinprutthiwong (Texas A&M University), Guofei Gu (Texas A&M University)

Popular Voice Assistant (VA) services such as Amazon Alexa and Google Assistant are rapidly appifying their platforms to allow a more flexible and diverse voice-controlled service experience. However, the ubiquitous deployment of VA devices and the increasing number of third-party applications have raised security and privacy concerns. While previous works such as hidden voice attacks mostly examine the problems of VA services' default Automatic Speech Recognition (ASR) component, our work analyzes and evaluates the security of the component that follows ASR, i.e., Natural Language Understanding (NLU), which performs semantic interpretation (i.e., text-to-intent) after ASR's acoustic-to-text processing. In particular, we focus on NLU's Intent Classifier, which customizes machine understanding for third-party VA applications (or vApps). We find that the semantic inconsistency caused by an Intent Classifier's improper semantic interpretation can create opportunities to breach the integrity of vApp processing when attackers deliberately leverage common spoken errors.
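To make the failure mode concrete, the toy sketch below mimics an Intent Classifier that routes an utterance to the vApp whose registered command template it matches best. The intent names, templates, and string-similarity matching are hypothetical stand-ins (production NLU components use learned models, not string comparison); the point is only how a one-word spoken error can redirect a command to a different vApp.

```python
# Minimal sketch, assuming a hypothetical template-matching Intent
# Classifier; not Amazon's or Google's actual NLU implementation.
import difflib

# Hypothetical command templates registered by two different vApps;
# the second squats on a phrase that sounds like the first.
INTENT_TEMPLATES = {
    "CapitalOne.CheckBalance": "ask capital one to check my balance",
    "CapitalWon.Open":         "ask capital won to check my balance",
}

def classify_intent(utterance: str) -> str:
    """Return the intent whose registered template best matches the text."""
    return max(
        INTENT_TEMPLATES,
        key=lambda intent: difflib.SequenceMatcher(
            None, utterance, INTENT_TEMPLATES[intent]).ratio(),
    )

# A correctly spoken command reaches the intended vApp ...
assert classify_intent("ask capital one to check my balance") \
    == "CapitalOne.CheckBalance"
# ... but a common spoken error ("won" for "one") silently routes the
# user to a different vApp, breaching vApp-processing integrity.
assert classify_intent("ask capital won to check my balance") \
    == "CapitalWon.Open"
```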

In this paper, we design the first linguistic-model-guided fuzzing tool, named LipFuzzer, to assess the security of the Intent Classifier and systematically discover misinterpretation-prone spoken errors based on vApps' voice command templates. To guide the fuzzing, we construct adversarial linguistic models with the help of Statistical Relational Learning (SRL) and emerging Natural Language Processing (NLP) techniques. In our evaluation, we verify the effectiveness and accuracy of LipFuzzer, and we use it to assess both the Amazon Alexa and Google Assistant vApp platforms. Our fuzzing results show that a large portion of real-world vApps are vulnerable.
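As a rough illustration of such a fuzzing loop, the sketch below (reusing classify_intent() and INTENT_TEMPLATES from the previous sketch) mutates a voice command template using a hand-written homophone table and flags any mutant that the classifier routes to a different intent. The confusion table and helper names are invented for this example; LipFuzzer itself derives mutations from SRL-based adversarial linguistic models rather than a fixed word list.

```python
# Simplified sketch of linguistic-model-guided fuzzing; a hand-written
# homophone table stands in for LipFuzzer's adversarial linguistic models.
PHONETIC_CONFUSIONS = {          # hypothetical common spoken errors
    "one": ["won"],
    "for": ["four", "fore"],
    "to":  ["two", "too"],
}

def mutate(template: str):
    """Yield variants of the template with one word swapped for a homophone."""
    words = template.split()
    for i, word in enumerate(words):
        for variant in PHONETIC_CONFUSIONS.get(word, []):
            yield " ".join(words[:i] + [variant] + words[i + 1:])

def fuzz_template(expected_intent: str, template: str) -> None:
    """Flag mutants that the Intent Classifier routes to another intent."""
    for mutant in mutate(template):
        observed = classify_intent(mutant)
        if observed != expected_intent:
            print(f"misinterpretation-prone: {mutant!r} -> {observed}")

fuzz_template("CapitalOne.CheckBalance",
              "ask capital one to check my balance")
# prints: misinterpretation-prone:
#   'ask capital won to check my balance' -> CapitalWon.Open
```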
