Zizhi Jin (Zhejiang University), Qinhong Jiang (Zhejiang University), Xuancun Lu (Zhejiang University), Chen Yan (Zhejiang University), Xiaoyu Ji (Zhejiang University), Wenyuan Xu (Zhejiang University)

LiDAR (Light Detection and Ranging) is a pivotal sensor for autonomous driving, offering precise 3D spatial information.
Previous signal attacks against LiDAR systems mainly exploit laser signals. In this paper, we investigate the possibility of cross-modality signal injection attacks, i.e., injecting intentional electromagnetic interference (IEMI) to manipulate LiDAR output. Our insight is that the internal modules of a LiDAR, i.e., the laser receiving circuit, the monitoring sensors, and the beam-steering modules, can still couple with IEMI attack signals even after passing strict electromagnetic compatibility (EMC) testing, causing the LiDAR system to malfunction. Based on these attack surfaces, we propose PhantomLiDAR, which manipulates LiDAR output in terms of Points Interference, Points Injection, Points Removal, and even LiDAR Power-Off.
We evaluate and demonstrate the effectiveness of PhantomLiDAR with both simulated and real-world experiments on five COTS LiDAR systems.
We also conduct feasibility experiments in real-world moving scenarios.
We provide potential defense measures that can be implemented at both the sensor level and the vehicle system level to mitigate the risks associated with IEMI attacks. Video demonstrations can be viewed at https://sites.google.com/view/phantomlidar.
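As a concrete way to picture the four output manipulations named above, the sketch below simulates their effect on a synthetic point cloud. It is an illustrative assumption only: the function names, parameters, and toy scene are invented for demonstration, and it models only the resulting point-cloud distortions seen by a downstream perception stack, not the IEMI signal generation or coupling mechanism described in the paper.

# Illustrative sketch (not the paper's implementation): the four PhantomLiDAR
# output-manipulation effects applied to a LiDAR point cloud represented as an
# (N, 3) NumPy array of x/y/z coordinates.
import numpy as np

rng = np.random.default_rng(0)

def points_interference(cloud, sigma=0.05):
    # Perturb existing points with noise, mimicking ranging interference.
    return cloud + rng.normal(0.0, sigma, size=cloud.shape)

def points_injection(cloud, n_fake=200, center=(10.0, 0.0, 1.0), spread=0.5):
    # Append a cluster of spoofed points, e.g., a phantom obstacle ahead.
    fake = rng.normal(loc=center, scale=spread, size=(n_fake, 3))
    return np.vstack([cloud, fake])

def points_removal(cloud, keep_ratio=0.3):
    # Randomly drop returns, mimicking suppressed detections.
    mask = rng.random(len(cloud)) < keep_ratio
    return cloud[mask]

def lidar_power_off(cloud):
    # Model a forced shutdown as an empty point cloud.
    return np.empty((0, 3))

if __name__ == "__main__":
    cloud = rng.uniform(-20, 20, size=(5000, 3))  # toy scene
    for name, fn in [("interference", points_interference),
                     ("injection", points_injection),
                     ("removal", points_removal),
                     ("power-off", lidar_power_off)]:
        print(f"{name:12s} -> {len(fn(cloud))} points")

The sketch is meant only to make the attack taxonomy tangible; in the paper the same effects are induced in hardware by coupling IEMI into the LiDAR's internal modules rather than by editing point clouds in software.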
