Christopher DiPalma, Ningfei Wang, Takami Sato, and Qi Alfred Chen (UC Irvine)

Robust perception is crucial for autonomous vehicle security. In this work, we design a practical adversarial patch attack against camera-based obstacle detection. We identify that the back of a box truck is an effective attack vector. We also improve attack robustness by considering a variety of input frames associated with the attack scenario. This demo includes videos that show our attack can cause end-to-end consequences on a representative autonomous driving system in a simulator.
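The robustness idea above (optimizing one patch against many input frames from the attack scenario rather than a single image) can be sketched with a toy model. Everything below is hypothetical for illustration: a linear "obstacle confidence" stands in for the real camera-based detector, and frames are modeled as per-pixel lighting variations; the paper's actual detector, patch placement, and optimization pipeline are not reproduced here.

```python
import random

random.seed(0)

N = 16  # number of patch pixels (hypothetical, tiny for illustration)

# Hypothetical linear detector: confidence = sum of weight * rendered pixel.
weights = [random.uniform(-1.0, 1.0) for _ in range(N)]

# Several frames of the attack scenario (e.g., different distances/lighting),
# modeled here as per-pixel scene values blended with the patch.
frames = [[random.uniform(0.4, 0.6) for _ in range(N)] for _ in range(5)]

def confidence(patch, frame):
    # Rendered pixel = patch blended with scene lighting; linear "detector".
    return sum(w * (0.7 * p + 0.3 * f) for w, p, f in zip(weights, patch, frame))

def avg_confidence(patch):
    # Averaging over frames is the robustness trick: the patch must suppress
    # detection across the whole scenario, not just one snapshot.
    return sum(confidence(patch, f) for f in frames) / len(frames)

patch = [0.5] * N          # start from a gray patch
before = avg_confidence(patch)

# For this toy linear model, the gradient of the average confidence w.r.t.
# patch pixel i is 0.7 * weights[i] (identical across frames), so plain
# gradient descent suppresses the averaged detection score.
for _ in range(100):
    patch = [min(1.0, max(0.0, p - 0.05 * 0.7 * w))  # clip to valid pixel range
             for p, w in zip(patch, weights)]

after = avg_confidence(patch)
```

In a real attack, the gradient would come from backpropagating through the detector on each frame, and the frame set would come from rendering or recording the patch (e.g., on the back of a box truck) under varied viewpoints.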
