Christopher DiPalma, Ningfei Wang, Takami Sato, and Qi Alfred Chen (UC Irvine)

Robust perception is crucial for autonomous vehicle security. In this work, we design a practical adversarial patch attack against camera-based obstacle detection. We identify that the back of a box truck is an effective attack vector. We also improve attack robustness by considering a variety of input frames associated with the attack scenario. This demo includes videos that show our attack can cause end-to-end consequences on a representative autonomous driving system in a simulator.


Digital Technologies in Pandemic: The Good, the Bad and...

Moderator: Ahmad-Reza Sadeghi, TU Darmstadt, Germany
Panelists: Mario Guglielmetti, Legal Officer, European Data Protection Supervisor*; Jaap-Henk Hoepman, Radboud University, The Netherlands; Alexandra Dmitrienko, University of Würzburg, Germany; Farinaz Koushanfar, UCSD, USA
*Attending in his personal capacity.


Let’s Stride Blindfolded in a Forest: Sublinear Multi-Client Decision...

Jack P. K. Ma (The Chinese University of Hong Kong), Raymond K. H. Tai (The Chinese University of Hong Kong), Yongjun Zhao (Nanyang Technological University), Sherman S.M. Chow (The Chinese University of Hong Kong)


Model-Agnostic Defense for Lane Detection against Adversarial Attack

Henry Xu, An Ju, and David Wagner (UC Berkeley)
Baidu Security Auto-Driving Security Award Winner ($1000 cash prize)!


Rosita: Towards Automatic Elimination of Power-Analysis Leakage in Ciphers

Madura A. Shelton (University of Adelaide), Niels Samwel (Radboud University), Lejla Batina (Radboud University), Francesco Regazzoni (University of Amsterdam and ALaRI – USI), Markus Wagner (University of Adelaide), Yuval Yarom (University of Adelaide and Data61)
