Takami Sato (University of California, Irvine), Sri Hrushikesh Varma Bhupathiraju (University of Florida), Michael Clifford (Toyota InfoTech Labs), Takeshi Sugawara (The University of Electro-Communications), Qi Alfred Chen (University of California, Irvine), Sara Rampazzi (University of Florida)

All vehicles must follow the rules that govern traffic behavior, regardless of whether the vehicles are human-driven or Connected Autonomous Vehicles (CAVs). Road signs indicate locally active rules, such as speed limits and requirements to yield or stop. Recent research has demonstrated attacks, such as adding stickers or projected colored patches to signs, that cause CAV misinterpretation, resulting in potential safety issues. Humans can see and potentially defend against these attacks, but they cannot detect what they cannot observe. We have developed an effective physical-world attack that leverages the sensitivity of filterless image sensors and the properties of Infrared Laser Reflections (ILRs), which are invisible to humans. The attack is designed to affect CAV cameras and perception, undermining traffic sign recognition by inducing misclassification. In this work, we formulate the threat model and requirements for an ILR-based traffic sign perception attack to succeed. We evaluate the effectiveness of the ILR attack with real-world experiments against two major traffic sign recognition architectures on four IR-sensitive cameras. Our black-box optimization methodology allows the attack to achieve up to a 100% attack success rate in indoor, static scenarios and a ≥80.5% attack success rate in our outdoor, moving vehicle scenarios. We find that the latest state-of-the-art certifiable defense is ineffective against ILR attacks, as it mis-certifies ≥33.5% of cases. To address this, we propose a detection strategy based on the physical properties of IR laser reflections, which can detect 96% of ILR attacks.
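The abstract does not detail the black-box optimization itself, but the general idea of tuning attack parameters using only the classifier's output scores can be illustrated with a minimal derivative-free random search. Everything below is a hypothetical sketch: the parameterization (reflection position and laser power) and the scoring function are stand-ins invented for illustration, not the authors' actual methodology or models.

```python
import random

def query_victim_classifier(params):
    # Hypothetical stand-in for photographing the sign under an IR laser
    # reflection with the given parameters (spot x, spot y, laser power,
    # all normalized to [0, 1]) and querying the victim classifier for
    # the confidence of the attacker's target (wrong) class.
    # Here it is a toy score peaking at an arbitrary parameter setting.
    x, y, power = params
    return 1.0 / (1.0 + (x - 0.3) ** 2 + (y - 0.7) ** 2 + (power - 0.5) ** 2)

def black_box_search(score_fn, iters=2000, seed=0):
    # Derivative-free random search: sample candidate laser parameters,
    # keep whichever setting maximizes the target-class confidence.
    # No gradients or model internals are needed, matching a black-box
    # threat model in which the attacker can only observe outputs.
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(iters):
        candidate = (rng.random(), rng.random(), rng.random())
        score = score_fn(candidate)
        if score > best_score:
            best_params, best_score = candidate, score
    return best_params, best_score

best_params, best_score = black_box_search(query_victim_classifier)
```

In a real attack the score function would involve physically aiming the laser and capturing frames, so query-efficient black-box optimizers would matter far more than in this toy; the sketch only shows the output-only interaction pattern.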
