Christopher DiPalma, Ningfei Wang, Takami Sato, and Qi Alfred Chen (UC Irvine)

Robust perception is crucial for autonomous vehicle security. In this work, we design a practical adversarial patch attack against camera-based obstacle detection. We identify that the back of a box truck is an effective attack vector. We also improve attack robustness by considering a variety of input frames associated with the attack scenario. This demo includes videos that show our attack can cause end-to-end consequences on a representative autonomous driving system in a simulator.
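
The key robustness idea described above is to optimize a single patch against many input frames drawn from the attack scenario (e.g., different distances and viewing angles of the truck's rear). The sketch below illustrates that expectation-over-frames style optimization in PyTorch; it is only a minimal illustration under stated assumptions, not the authors' implementation. The names `detector`, `apply_patch`, the patch placement `box`, and the confidence-suppression loss are all hypothetical.

```python
# Minimal sketch: optimize one adversarial patch over many frames so that the
# detector's obstacle confidence is suppressed on average across the scenario.
# All names here (detector, frames, apply_patch, box) are illustrative
# assumptions, not the authors' actual code.
import torch
import torch.nn.functional as F

def apply_patch(frame, patch, box):
    """Paste the patch onto the region of the frame covering the truck's rear.
    `box` = (top, left, height, width) of the assumed patch placement."""
    out = frame.clone()
    t, l, h, w = box
    resized = F.interpolate(patch.unsqueeze(0), size=(h, w),
                            mode="bilinear", align_corners=False).squeeze(0)
    out[:, t:t + h, l:l + w] = resized
    return out

def optimize_patch(detector, frames, boxes, steps=500, lr=0.01):
    """Optimize a patch whose loss is averaged over all frames in the scenario,
    which is what makes the attack robust to viewpoint variation."""
    patch = torch.rand(3, 64, 64, requires_grad=True)   # RGB patch in [0, 1]
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = 0.0
        for frame, box in zip(frames, boxes):            # frames at varied distances/angles
            adv = apply_patch(frame, patch.clamp(0, 1), box)
            scores = detector(adv.unsqueeze(0))          # assumed: per-object confidence scores
            loss = loss + scores.max()                   # suppress the strongest detection
        (loss / len(frames)).backward()
        opt.step()
        with torch.no_grad():
            patch.clamp_(0, 1)                           # keep the patch a valid image
    return patch.detach().clamp(0, 1)
```

In practice the averaged loss is what distinguishes this from a single-frame attack: a patch tuned to one frame often fails once the vehicle's viewpoint changes, whereas optimizing over the whole approach trajectory keeps the detection suppressed end to end.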
