Sri Hrushikesh Varma Bhupathiraju (University of Florida), Takami Sato (University of California, Irvine), Michael Clifford (Toyota Info Labs), Takeshi Sugawara (The University of Electro-Communications), Qi Alfred Chen (University of California, Irvine), Sara Rampazzi (University of Florida)

Connected, autonomous, semi-autonomous, and human-driven vehicles must accurately detect and adhere to traffic light signals to ensure safe and efficient traffic flow. Misinterpreting traffic lights can create safety hazards. Recent work demonstrated attacks that project structured light patterns onto vehicle cameras, causing traffic signal misinterpretation. In this work, we introduce a new physical attack method against traffic light recognition systems that exploits a vulnerability in the physical structure of traffic lights. We observe that when laser light is projected onto a traffic light, it is scattered by the reflectors (mirrors) located inside the fixture. To a vehicle’s camera, the attacker-injected laser light appears to be a genuine light source, resulting in misclassifications by traffic light recognition models. We show that our methodology can induce misclassifications using both visible and invisible light, whether the traffic light is operational (on) or not (off). We present classification results for three state-of-the-art traffic light recognition models and show that this attack can cause misclassification of both red and green traffic light status. Tested on incandescent traffic lights, our attack can be deployed from up to 25 meters away from the target traffic light. It reaches an attack success rate of 100% in misclassifying green status, and 86% in misclassifying red status, in a controlled, dynamic scenario.
