Magdalena Pasternak (University of Florida), Kevin Warren (University of Florida), Daniel Olszewski (University of Florida), Susan Nittrouer (University of Florida), Patrick Traynor (University of Florida), Kevin Butler (University of Florida)

Cochlear implants (CIs) allow deaf and hard-of-hearing individuals to use audio devices, such as phones or voice assistants. However, the advent of increasingly sophisticated synthetic audio (i.e., deepfakes) potentially threatens these users, and this population's susceptibility to such attacks is unclear. In this paper, we perform the first study of the impact of audio deepfakes on CI populations. We examine the use of CI-simulated audio within deepfake detectors. Based on these results, we conduct a user study with 35 CI users and 87 hearing persons (HPs) to determine differences in how CI users perceive deepfake audio. We show that CI users can, similarly to HPs, identify text-to-speech (TTS) generated deepfakes. Yet, they perform substantially worse on deepfakes produced by voice-conversion algorithms, achieving only 67% correct audio classification. We also evaluate how detection models trained on CI-simulated audio compare to CI users and investigate whether they can effectively act as proxies for CI users. This work begins an investigation into the intersection between adversarial audio and CI users to identify and mitigate threats against this marginalized group.
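The paper does not specify its CI-simulation pipeline here, but CI hearing is commonly approximated with a noise vocoder: the signal is split into a handful of band-pass channels, each channel's temporal envelope modulates band-limited noise, and the channels are summed. The sketch below illustrates that general technique (the channel count, band edges, and envelope cutoff are illustrative assumptions, not the paper's parameters).

```python
import numpy as np
from scipy.signal import butter, sosfilt

def ci_simulate(audio, sr, n_channels=8, lo=100.0, hi=7000.0):
    """Approximate cochlear-implant hearing with an n-channel noise vocoder."""
    rng = np.random.default_rng(0)
    # Logarithmically spaced band edges between lo and hi Hz.
    edges = np.geomspace(lo, hi, n_channels + 1)
    out = np.zeros(len(audio), dtype=float)
    for k in range(n_channels):
        # Band-pass the input into channel k.
        sos = butter(4, [edges[k], edges[k + 1]], btype="bandpass",
                     fs=sr, output="sos")
        band = sosfilt(sos, audio)
        # Envelope: rectify, then low-pass (160 Hz is a typical CI cutoff).
        env_sos = butter(2, 160.0, btype="low", fs=sr, output="sos")
        envelope = np.clip(sosfilt(env_sos, np.abs(band)), 0.0, None)
        # Modulate band-limited noise by the channel envelope.
        noise = sosfilt(sos, rng.standard_normal(len(audio)))
        out += envelope * noise
    # Rescale to the input's peak level.
    peak = np.max(np.abs(out))
    return out * (np.max(np.abs(audio)) / peak) if peak > 0 else out

# Example: vocode one second of a 440 Hz tone sampled at 16 kHz.
sr = 16000
t = np.arange(sr) / sr
vocoded = ci_simulate(np.sin(2 * np.pi * 440.0 * t), sr)
```

Audio processed this way discards fine spectral structure while preserving coarse envelopes, which is why detectors trained on such audio might plausibly serve as proxies for CI listeners.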
