Takami Sato (UC Irvine) and Qi Alfred Chen (UC Irvine)

Deep Neural Network (DNN)-based lane detection is widely used in autonomous driving technologies. At the same time, recent studies demonstrate that adversarial attacks on lane detection can have serious consequences for certain production-grade autonomous driving systems. However, the generality of these attacks, especially their effectiveness against other state-of-the-art lane detection approaches, has not been well studied. In this work, we report our progress on the first large-scale empirical study evaluating the robustness of 4 major types of lane detection methods under 3 types of physical-world adversarial attacks in end-to-end driving scenarios. We find that each lane detection method has different security characteristics; in particular, some models are highly vulnerable to certain types of attack. Surprisingly, but probably not coincidentally, popular production lane centering systems adopt the lane detection approach that shows higher resistance to such attacks. As more automakers include autonomous driving features in their products, we hope our research helps them recognize the risks involved in choosing lane detection algorithms.
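To make the evaluation setup concrete, below is a minimal sketch (not the authors' code) of how the end-to-end impact of a physical-world attack on lane detection might be scored: given per-frame lane-center estimates from a benign run and from a run with an adversarial object in the scene, compute the induced lateral deviation and decide whether it is large enough to push the vehicle out of its lane. The threshold and the synthetic numbers are illustrative assumptions only.

```python
import numpy as np

def lateral_deviation(benign_centers: np.ndarray, attacked_centers: np.ndarray) -> float:
    """Mean absolute lateral offset (meters) between the lane centers the
    vehicle would track under benign vs. attacked perception."""
    return float(np.mean(np.abs(benign_centers - attacked_centers)))

def attack_succeeds(benign_centers: np.ndarray,
                    attacked_centers: np.ndarray,
                    lane_half_width_m: float = 1.85) -> bool:
    """Count the attack as successful if the induced deviation exceeds a
    (hypothetical) half-lane-width threshold, i.e., the vehicle would
    drift out of its lane."""
    return lateral_deviation(benign_centers, attacked_centers) > lane_half_width_m

# Example with synthetic numbers over 100 frames: a 0.9 m average bias
# stays within the assumed 1.85 m half-lane width, so this hypothetical
# attack would be counted as a failure for this model.
benign = np.zeros(100)            # perception tracks the true lane center
attacked = np.full(100, 0.9)      # perception is biased 0.9 m to one side
print(lateral_deviation(benign, attacked))   # 0.9
print(attack_succeeds(benign, attacked))     # False
```

A per-model, per-attack success rate computed this way is one simple way such a comparison across lane detection approaches could be tabulated.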
