Christopher DiPalma, Ningfei Wang, Takami Sato, and Qi Alfred Chen (UC Irvine)

Robust perception is crucial for autonomous vehicle security. In this work, we design a practical adversarial patch attack against camera-based obstacle detection. We identify the back of a box truck as an effective attack vector, and we improve attack robustness by optimizing the patch over a variety of input frames associated with the attack scenario. This demo includes videos showing that our attack causes end-to-end consequences on a representative autonomous driving system in a simulator.
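
As a rough illustration of the multi-frame optimization idea (a minimal sketch, not the authors' implementation), the PyTorch code below tunes a single patch so that a detector's obstacle confidence drops across a whole batch of frames. StandInDetector, the random frames, and the patch placement are hypothetical stand-ins for the real camera-based obstacle detector and the frames captured in the attack scenario.

import torch
import torch.nn as nn

torch.manual_seed(0)

class StandInDetector(nn.Module):
    """Hypothetical stand-in for a camera-based obstacle detector."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, 3, stride=2, padding=1),
        )

    def forward(self, x):
        # Return a per-frame "obstacle confidence" score in [0, 1].
        return torch.sigmoid(self.backbone(x)).mean(dim=(1, 2, 3))

def apply_patch(frames, patch, top, left):
    """Paste the patch onto every frame at a fixed region (e.g. a truck's rear)."""
    patched = frames.clone()
    h, w = patch.shape[-2:]
    patched[:, :, top:top + h, left:left + w] = patch.clamp(0, 1)
    return patched

detector = StandInDetector().eval()
frames = torch.rand(16, 3, 64, 64)                 # stand-in for scenario frames
patch = torch.rand(3, 24, 24, requires_grad=True)  # the adversarial patch
opt = torch.optim.Adam([patch], lr=0.05)

for step in range(200):
    patched = apply_patch(frames, patch, top=20, left=20)
    loss = detector(patched).mean()  # suppress detection across all frames at once
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        patch.clamp_(0, 1)           # keep the patch in the valid pixel range

Averaging the loss over many frames is what buys robustness here: a patch tuned to a single frame tends to fail once viewpoint or distance changes, whereas one optimized across the scenario's frames must fool the detector throughout the approach.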

View More Papers

Detecting CAN Masquerade Attacks with Signal Clustering Similarity

Pablo Moriano, Robert A. Bridges, and Michael D. Iannacone (Oak Ridge National Laboratory)

Evading Voltage-Based Intrusion Detection on Automotive CAN

Rohit Bhatia (Purdue University), Vireshwar Kumar (Indian Institute of Technology Delhi), Khaled Serag (Purdue University), Z. Berkay Celik (Purdue University), Mathias Payer (EPFL), and Dongyan Xu (Purdue University)

Demo #1: Curricular Reinforcement Learning for Robust Policy in...

Yunzhe Tian, Yike Li, Yingxiao Xiang, Wenjia Niu, Endong Tong, and Jiqiang Liu (Beijing Jiaotong University)

icLibFuzzer: Isolated-context libFuzzer for Improving Fuzzer Comparability

Yu-Chuan Liang, Hsu-Chun Hsiao (National Taiwan University)
