Christopher DiPalma, Ningfei Wang, Takami Sato, and Qi Alfred Chen (UC Irvine)

Robust perception is crucial for autonomous vehicle security. In this work, we design a practical adversarial patch attack against camera-based obstacle detection. We identify the back of a box truck as an effective attack vector, and we improve the attack's robustness by considering a variety of input frames associated with the attack scenario. This demo includes videos showing that our attack can cause end-to-end consequences on a representative autonomous driving system in a simulator.
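To make the multi-frame robustness idea concrete, below is a minimal sketch of optimizing a single patch texture jointly over many frames from the attack scenario (an expectation-over-frames average, which is one common way to realize this kind of robustness). Everything here is a hypothetical stand-in rather than the authors' implementation: `detector` is assumed to return per-detection confidence scores, `frames` is a list of scenario images, `boxes` gives the truck-rear region per frame, and `apply_patch` is an illustrative placement helper.

```python
# Sketch of multi-frame ("robust") adversarial patch optimization.
# The detector interface, frames, and patch placement are assumptions,
# not the authors' code.
import torch

def apply_patch(frame, patch, box):
    """Paste the patch into the frame region covering the truck's rear.
    `box` = (top, left, h, w) is assumed to come from scenario geometry."""
    top, left, h, w = box
    patched = frame.clone()
    patched[:, top:top + h, left:left + w] = torch.nn.functional.interpolate(
        patch.unsqueeze(0), size=(h, w), mode="bilinear", align_corners=False
    ).squeeze(0)
    return patched

def optimize_patch(detector, frames, boxes, steps=500, lr=0.01):
    """Optimize one patch over many frames so it stays effective across
    the viewpoints and distances seen in the attack scenario."""
    patch = torch.rand(3, 64, 64, requires_grad=True)  # texture in [0, 1]
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        loss = 0.0
        for frame, box in zip(frames, boxes):
            x = apply_patch(frame, patch, box)
            # Assumed interface: detector returns confidence scores for
            # detections in the image; suppress the strongest obstacle score.
            scores = detector(x.unsqueeze(0))
            loss = loss + scores.max()
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            patch.clamp_(0.0, 1.0)  # keep the texture in valid pixel range
    return patch.detach()
```

Averaging the loss over many frames, rather than attacking a single image, is what keeps the patch effective as the camera's viewpoint changes during an approach to the truck.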
