Hengyi Liang, Ruochen Jiao (Northwestern University), Takami Sato, Junjie Shen, Qi Alfred Chen (UC Irvine), and Qi Zhu (Northwestern University)

Best Short Paper Award Winner!

Machine learning techniques, particularly those based on deep neural networks (DNNs), are widely adopted in the development of advanced driver-assistance systems (ADAS) and autonomous vehicles. While providing significant improvement over traditional methods in average performance, the usage of DNNs also presents great challenges to system safety, especially given the uncertainty of the surrounding environment, the disturbance to system operations, and the current lack of methodologies for predicting DNN behavior. In particular, adversarial attacks on the sensing input may cause errors in the system's perception of the environment and lead to system failure. However, existing works mainly focus on analyzing the impact of such attacks on the sensing and perception results and designing mitigation strategies accordingly. We argue that since system safety is ultimately determined by the actions the system takes, it is essential to take an end-to-end approach and address adversarial attacks with consideration of the entire ADAS or autonomous driving pipeline, from sensing and perception to planning, navigation, and control. In this paper, we present our initial findings in quantitatively analyzing the impact of a type of adversarial attack (one that leverages a road patch) on system planning and control, and discuss some possible directions to systematically address such attacks with an end-to-end view.
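To illustrate the end-to-end perspective the abstract argues for, the following is a minimal sketch (not the paper's actual models; the dynamics, controller, and all parameter values are illustrative assumptions) of how a perception-level error, such as a spoofed lateral offset induced by an adversarial road patch, can be propagated through a simple lane-keeping controller to quantify its impact on the vehicle's actual lateral deviation, rather than only on the perception output:

```python
# Hypothetical end-to-end impact analysis: perception error -> control -> vehicle motion.
# All models below are simplified assumptions for illustration only.

def simulate_lane_keeping(perception_bias, steps=200, dt=0.05, gain=1.5):
    """Return the maximum lateral deviation (m) from lane center over the run.

    perception_bias: constant error (m) injected into the perceived lateral
    offset, modeling the effect of an adversarial road patch on perception.
    """
    lateral = 0.0   # true lateral offset from lane center (m)
    max_dev = 0.0
    for _ in range(steps):
        # Perception: the estimated offset is corrupted by the attack.
        perceived = lateral + perception_bias
        # Control: simple proportional steering toward the perceived center.
        steering = -gain * perceived
        # Plant: first-order lateral dynamics (illustrative).
        lateral += steering * dt
        max_dev = max(max_dev, abs(lateral))
    return max_dev

baseline = simulate_lane_keeping(perception_bias=0.0)
attacked = simulate_lane_keeping(perception_bias=0.4)  # 0.4 m spoofed offset
print(f"baseline max deviation: {baseline:.2f} m")
print(f"attacked max deviation: {attacked:.2f} m")
```

The point of such an analysis is that the safety-relevant quantity is the closed-loop deviation of the vehicle, which depends on the controller and dynamics as well as the perception error; the same perception-level attack can be benign or dangerous depending on the downstream pipeline.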

View More Papers

PHOENIX: Device-Centric Cellular Network Protocol Monitoring using Runtime Verification

Mitziu Echeverria (The University of Iowa), Zeeshan Ahmed (The University of Iowa), Bincheng Wang (The University of Iowa), M. Fareed Arif (The University of Iowa), Syed Rafiul Hussain (Pennsylvania State University), Omar Chowdhury (The University of Iowa)


Physical Layer Data Manipulation Attacks on the CAN Bus

Abdullah Zubair Mohammed (Virginia Tech), Yanmao Man (University of Arizona), Ryan Gerdes (Virginia Tech), Ming Li (University of Arizona) and Z. Berkay Celik (Purdue University)


POP and PUSH: Demystifying and Defending against (Mach) Port-oriented...

Min Zheng (Orion Security Lab, Alibaba Group), Xiaolong Bai (Orion Security Lab, Alibaba Group), Yajin Zhou (Zhejiang University), Chao Zhang (Institute for Network Science and Cyberspace, Tsinghua University), Fuping Qu (Orion Security Lab, Alibaba Group)


EarArray: Defending against DolphinAttack via Acoustic Attenuation

Guoming Zhang (Zhejiang University), Xiaoyu Ji (Zhejiang University), Xinfeng Li (Zhejiang University), Gang Qu (University of Maryland), Wenyuan Xu (Zhejiang University)
