Christopher DiPalma, Ningfei Wang, Takami Sato, and Qi Alfred Chen (UC Irvine)

Robust perception is crucial for autonomous vehicle security. In this work, we design a practical adversarial patch attack against camera-based obstacle detection. We identify that the back of a box truck is an effective attack vector. We also improve attack robustness by considering a variety of input frames associated with the attack scenario. This demo includes videos that show our attack can cause end-to-end consequences on a representative autonomous driving system in a simulator.
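The abstract describes the approach only at a high level, so the sketch below illustrates what optimizing a single adversarial patch over a variety of input frames could look like. This is not the authors' implementation: the detector, patch size, placement region, and loss are placeholder assumptions in a minimal PyTorch example.

```python
# Minimal sketch of multi-frame adversarial patch optimization.
# Hypothetical names throughout: `detector`, patch size, and the
# placement region (standing in for the rear of the box truck).
import torch

def apply_patch(frames, patch, top=60, left=120):
    """Paste the patch onto every frame at a fixed image region."""
    patched = frames.clone()
    h, w = patch.shape[1], patch.shape[2]
    patched[:, :, top:top + h, left:left + w] = patch
    return patched

def optimize_patch(detector, frames, steps=500, lr=0.01):
    """Optimize one patch against many frames from the attack scenario,
    so the patch stays effective across viewpoints and distances."""
    patch = torch.rand(3, 64, 64, requires_grad=True)  # assumed patch size
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        patched = apply_patch(frames, patch.clamp(0, 1))
        # Attack goal: suppress the detector's obstacle confidence,
        # averaged over all frames rather than a single one.
        loss = detector(patched).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return patch.detach().clamp(0, 1)

if __name__ == "__main__":
    frames = torch.rand(8, 3, 240, 320)  # stand-in for recorded frames
    dummy = torch.nn.Sequential(torch.nn.Conv2d(3, 1, 3),
                                torch.nn.AdaptiveAvgPool2d(1))
    patch = optimize_patch(lambda x: dummy(x).squeeze(), frames, steps=10)
```

Averaging the loss over a batch of frames is one way to make a patch robust to the viewpoint and distance changes a moving vehicle would see, which matches the robustness goal stated in the abstract.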

View More Papers

(Short) WIP: End-to-End Analysis of Adversarial Attacks to Automated...

Hengyi Liang, Ruochen Jiao (Northwestern University), Takami Sato, Junjie Shen, Qi Alfred Chen (UC Irvine), and Qi Zhu (Northwestern University) Best Short Paper Award Winner!


EarArray: Defending against DolphinAttack via Acoustic Attenuation

Guoming Zhang (Zhejiang University), Xiaoyu Ji (Zhejiang University), Xinfeng Li (Zhejiang University), Gang Qu (University of Maryland), Wenyuan Xu (Zhejiang University)


HERA: Hotpatching of Embedded Real-time Applications

Christian Niesler (University of Duisburg-Essen), Sebastian Surminski (University of Duisburg-Essen), Lucas Davi (University of Duisburg-Essen)


SpecTaint: Speculative Taint Analysis for Discovering Spectre Gadgets

Zhenxiao Qi (UC Riverside), Qian Feng (Baidu USA), Yueqiang Cheng (NIO Security Research), Mengjia Yan (MIT), Peng Li (ByteDance), Heng Yin (UC Riverside), Tao Wei (Ant Group)
