Takami Sato (University of California, Irvine), Sri Hrushikesh Varma Bhupathiraju (University of Florida), Michael Clifford (Toyota InfoTech Labs), Takeshi Sugawara (The University of Electro-Communications), Qi Alfred Chen (University of California, Irvine), Sara Rampazzi (University of Florida)

All vehicles, whether human-driven or Connected Autonomous Vehicles (CAVs), must follow the rules that govern traffic behavior. Road signs indicate locally active rules, such as speed limits and requirements to yield or stop. Recent research has demonstrated attacks, such as adding stickers or projecting colored patches onto signs, that cause CAV misinterpretation and lead to potential safety issues. Humans can see and potentially defend against these attacks, but humans cannot detect what they cannot observe. We have developed an effective physical-world attack that leverages the sensitivity of filterless image sensors and the properties of Infrared Laser Reflections (ILRs), which are invisible to humans. The attack targets CAV cameras and perception, undermining traffic sign recognition by inducing misclassification. In this work, we formulate the threat model and requirements for a successful ILR-based traffic sign perception attack. We evaluate the effectiveness of the ILR attack with real-world experiments against two major traffic sign recognition architectures on four IR-sensitive cameras. Our black-box optimization methodology achieves up to a 100% attack success rate in indoor, static scenarios and a ≥80.5% attack success rate in our outdoor, moving vehicle scenarios. We find that the latest state-of-the-art certifiable defense is ineffective against ILR attacks, as it mis-certifies ≥33.5% of cases. To address this, we propose a detection strategy based on the physical properties of IR laser reflections, which can detect 96% of ILR attacks.
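To illustrate the black-box character of the optimization the abstract mentions, the following is a minimal sketch, not the authors' implementation: it assumes the attacker tunes laser parameters (normalized power and the spot's position on the sign) purely by querying the classifier's output confidence for a target class. The simulated_confidence() function is a hypothetical stand-in for the real capture-and-classify loop (fire laser, grab camera frame, query the model), and plain random search stands in for whatever black-box optimizer the paper actually uses.

import random

def simulated_confidence(power, x, y):
    # Hypothetical stand-in for the physical pipeline: returns the
    # classifier's confidence in the attacker's target class given a
    # laser spot of normalized power at position (x, y) on the sign.
    return max(0.0, 1.0 - ((power - 0.7)**2 + (x - 0.4)**2 + (y - 0.5)**2))

def black_box_search(score_fn, iters=500, seed=0):
    # The attacker never needs gradients or model internals: each
    # candidate is scored only through the model's output confidence,
    # which is what makes the search black-box.
    rng = random.Random(seed)
    best_params, best_score = None, -1.0
    for _ in range(iters):
        cand = (rng.random(), rng.random(), rng.random())  # power, x, y in [0, 1]
        score = score_fn(*cand)
        if score > best_score:
            best_params, best_score = cand, score
    return best_params, best_score

if __name__ == "__main__":
    params, score = black_box_search(simulated_confidence)
    print(f"best (power, x, y) = {params}, target confidence = {score:.3f}")

In the physical attack, each query corresponds to adjusting the laser and re-capturing a frame, so a sample-efficient derivative-free optimizer matters far more than in this toy simulation.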
