Hadi Abdullah (Visa Research), Aditya Karlekar (University of Florida), Saurabh Prasad (University of Florida), Muhammad Sajidur Rahman (University of Florida), Logan Blue (University of Florida), Luke A. Bauer (University of Florida), Vincent Bindschaedler (University of Florida), Patrick Traynor (University of Florida)

Audio CAPTCHAs are supposed to provide a strong defense for online resources; however, advances in speech-to-text mechanisms have rendered these defenses ineffective. Audio CAPTCHAs cannot simply be abandoned, as they are specifically named by the W3C as important enablers of accessibility. Accordingly, demonstrably more robust audio CAPTCHAs are important to the future of a secure and accessible Web. We look to recent literature on attacks on speech-to-text systems for inspiration in constructing robust, principle-driven audio defenses. We begin by comparing 20 recent attack papers, classifying and measuring their suitability to serve as the basis of new "robust to transcription" but "easy for humans to understand" CAPTCHAs. After showing that none of these attacks alone are sufficient, we propose a new mechanism that is both comparatively intelligible (evaluated through a user study) and hard to automatically transcribe (i.e., $P(\mathrm{transcription}) = 4 \times 10^{-5}$). We also demonstrate that our audio samples have a high probability of being detected as CAPTCHAs when given to speech-to-text systems ($P(\mathrm{evasion}) = 1.77 \times 10^{-4}$). Finally, we show that our method is robust to WaveGuard, a popular mechanism designed to defeat adversarial examples (and enable ASRs to output the original transcript instead of the adversarial one); our method breaks WaveGuard with a 99% success rate. In so doing, we not only demonstrate a CAPTCHA that is approximately four orders of magnitude more difficult to crack, but also show that such systems can be designed using the insights gained from attack papers, exploiting the differences between the ways that humans and computers process audio.
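As a rough back-of-the-envelope reading of the "four orders of magnitude" claim (assuming, purely for illustration, a baseline $P_{\text{baseline}}$ close to 1 for conventional audio CAPTCHAs transcribed by modern speech-to-text systems; that baseline is not stated in the abstract):

$$\log_{10}\frac{P_{\text{baseline}}}{P(\mathrm{transcription})} \approx \log_{10}\frac{1}{4 \times 10^{-5}} \approx 4.4$$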
