Ziwen Wan (University of California, Irvine), Junjie Shen (University of California, Irvine), Jalen Chuang (University of California, Irvine), Xin Xia (University of California, Los Angeles), Joshua Garcia (University of California, Irvine), Jiaqi Ma (University of California, Los Angeles), Qi Alfred Chen (University of California, Irvine)

In high-level Autonomous Driving (AD) systems, behavioral planning is in charge of making high-level driving decisions such as cruising and stopping, and is thus highly security-critical. In this work, we perform the first systematic study of semantic security vulnerabilities specific to overly-conservative AD behavioral planning behaviors, i.e., those that can cause failed or significantly-degraded mission performance, which can be critical for AD services such as robo-taxi/delivery. We call them semantic Denial-of-Service (DoS) vulnerabilities, which we envision to be most generally exposed in practical AD systems due to their tendency toward conservativeness to avoid safety incidents. To achieve high practicality and realism, we assume that the attacker can only introduce seemingly-benign external physical objects into the driving environment, e.g., off-road dumped cardboard boxes.

To systematically discover such vulnerabilities, we design PlanFuzz, a novel dynamic testing approach that addresses various problem-specific design challenges. Specifically, we propose and identify planning invariants as novel testing oracles, and design new input generation to systematically enforce problem-specific constraints on attacker-introduced physical objects. We also design a novel behavioral planning vulnerability distance metric to effectively guide the discovery. We evaluate PlanFuzz on 3 planning implementations from practical open-source AD systems, and find that it can effectively discover 9 previously-unknown semantic DoS vulnerabilities without false positives. Ablations show that each of our new designs is necessary: removing any one of them generally causes statistically significant performance drops. We further perform exploitation case studies using simulation and real-vehicle traces, and discuss root causes and potential fixes.
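The search strategy described above can be illustrated with a minimal, hypothetical sketch: a greedy fuzzing loop that mutates attacker-placed objects within physical-placement constraints, uses a planning invariant as the oracle, and is guided by a vulnerability distance metric. All names here (`mutate_objects`, `plan_and_measure`, the single-coordinate scenario encoding) are illustrative assumptions, not the actual PlanFuzz implementation, which drives real AD planner code.

```python
import random

def mutate_objects(objects, bounds):
    """Perturb one attacker-introduced object's position, clamped to the
    allowed (seemingly-benign, e.g., off-road) placement region."""
    mutated = [dict(o) for o in objects]
    target = random.choice(mutated)
    lo, hi = bounds
    target["x"] = min(max(target["x"] + random.uniform(-1.0, 1.0), lo), hi)
    return mutated

def fuzz(plan_and_measure, seed_objects, bounds, budget=1000):
    """Greedy distance-guided search.

    plan_and_measure(objects) stands in for running the behavioral planner
    on the scenario and returning (invariant_violated, distance):
      - invariant_violated: the planning-invariant oracle, e.g., "the AD
        vehicle fails to make progress although it legally can";
      - distance: how close the planner's decision variables are to an
        overly-conservative (semantic DoS) decision -- smaller is closer.
    """
    best = seed_objects
    _, best_dist = plan_and_measure(best)
    for _ in range(budget):
        candidate = mutate_objects(best, bounds)
        violated, dist = plan_and_measure(candidate)
        if violated:
            return candidate              # semantic DoS trigger found
        if dist < best_dist:              # guided: keep closer candidates
            best, best_dist = candidate, dist
    return None                           # budget exhausted, nothing found
```

In this sketch, the distance metric plays the role that code coverage plays in conventional fuzzers: it gives the search a gradient toward the oracle violation even when no violation has been observed yet.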
