Christopher DiPalma, Ningfei Wang, Takami Sato, and Qi Alfred Chen (UC Irvine)

Robust perception is crucial for autonomous vehicle security. In this work, we design a practical adversarial patch attack against camera-based obstacle detection. We identify the back of a box truck as an effective attack vector. We further improve attack robustness by accounting for the variety of input frames that arise in the attack scenario. This demo includes videos showing that our attack can cause end-to-end consequences on a representative autonomous driving system in a simulator.
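The frame-robust optimization idea can be sketched with a toy surrogate. Everything below is an illustrative assumption, not the authors' system: the "detector" is a linear score through a sigmoid, the patch region stands in for the back of the box truck, and per-frame brightness transforms approximate the variety of input frames in the attack scenario.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a camera-based obstacle detector: its obstacle score is
# a linear function of the image passed through a sigmoid. The real attack
# targets a deep detector; this surrogate only illustrates the loop.
H = W = 16
w = rng.normal(size=(H, W))

def obstacle_confidence(img):
    return 1.0 / (1.0 + np.exp(-np.sum(w * img)))

# Hypothetical patch placement: the image region covered by the truck back.
r0, r1, c0, c1 = 4, 12, 4, 12

def render(frame, patch, scale, shift):
    """Paste the patch with a per-frame brightness transform that models
    varying distance and lighting across frames of the attack scenario."""
    out = frame.copy()
    out[r0:r1, c0:c1] = np.clip(scale * patch + shift, 0.0, 1.0)
    return out

def sample_transform():
    return rng.uniform(0.7, 1.3), rng.uniform(-0.1, 0.1)

base = rng.uniform(0.0, 1.0, size=(H, W))            # a benign camera frame
patch = rng.uniform(0.0, 1.0, size=(r1 - r0, c1 - c0))

# Minimize the detector logit averaged over sampled frame transforms
# (expectation-over-transformation style). For this linear surrogate the
# logit gradient w.r.t. the patch is analytic: scale * w[region], masked
# out wherever the brightness clip is saturated.
n = 8
for _ in range(300):
    grad = np.zeros_like(patch)
    for _ in range(n):
        s, b = sample_transform()
        rendered = np.clip(s * patch + b, 0.0, 1.0)
        active = (rendered > 0.0) & (rendered < 1.0)
        grad += s * w[r0:r1, c0:c1] * active
    patch = np.clip(patch - 0.1 * grad / n, 0.0, 1.0)

# Averaging over many transformed frames, the optimized patch should
# suppress the obstacle score relative to the unmodified truck back.
before = np.mean([obstacle_confidence(render(base, base[r0:r1, c0:c1],
                                             *sample_transform()))
                  for _ in range(16)])
after = np.mean([obstacle_confidence(render(base, patch,
                                            *sample_transform()))
                 for _ in range(16)])
```

Averaging the gradient over sampled transforms, rather than attacking a single frame, is what makes the patch hold up as the truck's appearance changes from frame to frame.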
