Christopher DiPalma, Ningfei Wang, Takami Sato, and Qi Alfred Chen (UC Irvine)

Robust perception is crucial for autonomous vehicle security. In this work, we design a practical adversarial patch attack against camera-based obstacle detection. We identify the back of a box truck as an effective attack vector. We also improve attack robustness by optimizing over a variety of input frames associated with the attack scenario. This demo includes videos showing that our attack can cause end-to-end consequences on a representative autonomous driving system in a simulator.
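The robustness step described above, optimizing the patch over many frame variations rather than a single image, can be sketched in the style of Expectation-over-Transformation averaging. The snippet below is a minimal toy illustration, not the authors' actual pipeline: a linear score stands in for the real obstacle detector, and the shift and brightness ranges, patch size, and learning rate are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

H = W = 32                     # toy camera frame size
ph = pw = 8                    # patch size (e.g., printed on a truck's rear)
py, px = 12, 12                # nominal patch location in the frame

w = rng.normal(size=(H, W))    # toy linear "detector": score = sum(w * image)
base = rng.uniform(size=(H, W))  # toy background frame

def apply_patch(img, patch, dy, dx, b):
    """Paste the patch at a shifted location with a brightness factor b."""
    out = img.copy()
    out[py + dy:py + dy + ph, px + dx:px + dx + pw] = np.clip(b * patch, 0, 1)
    return out

# Optimize the patch to suppress the detector score, averaging gradients
# over sampled frame variations (small shifts + brightness changes).
patch = rng.uniform(size=(ph, pw))
lr = 0.05
for step in range(200):
    grad = np.zeros_like(patch)
    for _ in range(16):
        dy, dx = rng.integers(-2, 3, size=2)   # pixel shift per frame
        b = rng.uniform(0.7, 1.3)              # brightness per frame
        # For the linear score, d(score)/d(patch) = b * w over the
        # patched region (clipping ignored for simplicity).
        grad += b * w[py + dy:py + dy + ph, px + dx:px + dx + pw]
    patch = np.clip(patch - lr * grad / 16, 0, 1)

def mean_score(p, n=64):
    """Average detector score over sampled frame variations."""
    s = 0.0
    for _ in range(n):
        dy, dx = rng.integers(-2, 3, size=2)
        b = rng.uniform(0.7, 1.3)
        s += float(np.sum(w * apply_patch(base, p, dy, dx, b)))
    return s / n

s_random = mean_score(rng.uniform(size=(ph, pw)))
s_opt = mean_score(patch)
```

Averaging the gradient over sampled shifts and brightness levels is what keeps the patch effective across the many frames an approaching vehicle actually captures, rather than overfitting to one viewpoint.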
