Alireza Mohammadi (University of Michigan-Dearborn), Hafiz Malik (University of Michigan-Dearborn) and Masoud Abbaszadeh (GE Global Research)

Recent automotive hacking incidents have demonstrated that when an adversary gains access to a safety-critical controller area network (CAN), severe safety consequences can ensue. Under such threats, this paper explores the capabilities of an adversary who is interested in engaging the car brakes at full speed and causing wheel lockup conditions that lead to catastrophic road injuries. This paper shows that the physical capabilities of a CAN attacker can be studied through the lens of closed-loop attack policy design. In particular, it is demonstrated that the adversary can cause wheel lockups by means of closed-loop attack policies that command the frictional brake actuators with only limited knowledge of the tire-road interaction characteristics. The effectiveness of the proposed wheel lockup attack policy is shown via numerical simulations under different road conditions.
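The lockup mechanism the abstract alludes to can be illustrated with a standard quarter-car braking model: once the commanded brake torque exceeds what the peak tire-road friction can balance, the wheel's angular speed collapses and the longitudinal slip runs to one. The sketch below is only an illustration of that effect under assumed parameters, not the paper's attack policy; the quarter-car constants and the Burckhardt friction coefficients (c1, c2, c3) are assumed, wet-asphalt-like values.

```python
# Minimal quarter-car braking sketch (illustrative only; parameter values and
# the Burckhardt friction coefficients are assumptions, not from the paper).
# It shows how an excessive brake-torque command drives longitudinal slip
# toward 1 (wheel lockup), while a nominal torque settles at a stable slip.

import math

# Quarter-car / wheel parameters (assumed, typical passenger-car magnitudes)
m, J, r, g = 400.0, 1.2, 0.3, 9.81   # mass [kg], wheel inertia [kg m^2], radius [m], gravity [m/s^2]

def mu(slip, c1=0.86, c2=33.8, c3=0.35):
    """Burckhardt tire-road friction curve (wet-asphalt-like coefficients, assumed)."""
    return c1 * (1.0 - math.exp(-c2 * slip)) - c3 * slip

def simulate(brake_torque, v0=30.0, dt=1e-3, t_end=3.0):
    """Euler-integrate the quarter-car model under a constant brake torque [N m]."""
    v, omega = v0, v0 / r                       # start with zero slip
    t = 0.0
    while t < t_end and v > 0.5:
        slip = max(0.0, (v - omega * r) / max(v, 1e-6))
        fx = mu(slip) * m * g                   # longitudinal tire force
        v += (-fx / m) * dt                     # vehicle deceleration
        omega = max(0.0, omega + ((fx * r - brake_torque) / J) * dt)  # wheel spin-down
        t += dt
    return t, v, (v - omega * r) / max(v, 1e-6)

if __name__ == "__main__":
    for Tb, label in [(900.0, "nominal braking"), (2500.0, "attack-level torque")]:
        t, v, slip = simulate(Tb)
        print(f"{label:>20}: t={t:.2f} s  v={v:.1f} m/s  slip={slip:.2f}"
              + ("  <- lockup" if slip > 0.95 else ""))
```

With these assumed values, the 900 N m case finds a stable slip on the rising side of the friction curve, whereas the 2500 N m case demands more friction than the tire can supply, so the wheel locks while the vehicle is still moving.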

