Christopher DiPalma, Ningfei Wang, Takami Sato, and Qi Alfred Chen (UC Irvine)

Robust perception is crucial for autonomous vehicle security. In this work, we design a practical adversarial patch attack against camera-based obstacle detection. We identify that the back of a box truck is an effective attack vector. We also improve attack robustness by considering a variety of input frames associated with the attack scenario. This demo includes videos showing that our attack causes end-to-end consequences on a representative autonomous driving system in a simulator.
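To illustrate the multi-frame robustness idea, the sketch below shows one common way such a patch can be optimized over several frames at once, so that it is not overfit to a single viewpoint. This is a minimal, assumption-laden illustration, not the authors' implementation: the ToyDetector stand-in, the paste_patch helper, the patch location, and the loss are all placeholders chosen for the example.

```python
# Minimal sketch (not the authors' code): optimize an adversarial patch over
# many frames from the attack scenario so it remains effective across views.
# The detector below is a toy stand-in; a real attack would target the victim
# camera-based obstacle detector.
import torch
import torch.nn as nn

class ToyDetector(nn.Module):
    """Placeholder for the victim obstacle detector (returns an 'objectness' score)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, 5, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1))
    def forward(self, x):
        return torch.sigmoid(self.net(x))

def paste_patch(frame, patch, top, left):
    """Overlay the patch on the image region showing the truck's rear (location assumed)."""
    out = frame.clone()
    h, w = patch.shape[-2:]
    out[..., top:top + h, left:left + w] = patch
    return out

detector = ToyDetector().eval()
frames = [torch.rand(1, 3, 224, 224) for _ in range(8)]  # frames associated with the scenario
patch = torch.rand(1, 3, 64, 64, requires_grad=True)     # adversarial patch being optimized
opt = torch.optim.Adam([patch], lr=0.01)

for step in range(100):
    opt.zero_grad()
    # Average the attack loss over all frames so the patch works across the whole approach.
    loss = torch.stack([
        detector(paste_patch(f, patch.clamp(0, 1), 80, 80)).mean()
        for f in frames
    ]).mean()
    loss.backward()  # gradient of detection confidence w.r.t. patch pixels
    opt.step()       # lower confidence -> obstacle more likely to be missed
```

The key point is that the loss is averaged over a set of frames drawn from the attack scenario rather than a single image, which is what gives the patch its robustness as the vehicle approaches the box truck.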
