Masashi Fukunaga (Mitsubishi Electric), Takeshi Sugawara (The University of Electro-Communications)

The integrity of sensor measurements is crucial for safe and reliable autonomous driving, and researchers are actively studying physical-world injection attacks against light detection and ranging (LiDAR). Conventional work has focused on object/obstacle detectors, and the impact of such attacks on LiDAR-based simultaneous localization and mapping (SLAM) has remained an open research problem. Addressing this issue, we evaluate the robustness of a scan-matching SLAM algorithm in a simulation environment, using an attacker capability characterized through indoor and outdoor physical experiments. Our attack builds on Sato et al.'s asynchronous random spoofing attack, which penetrates the randomization countermeasures in modern LiDARs. The attack is effective with fake points injected behind the victim vehicle and can potentially evade detection-based countermeasures that operate within the range of object detectors. We discover that mapping is susceptible along the z-axis, the direction perpendicular to the ground, because feature points are scarce both in the sky and on the road. The attack causes significant changes in the map, such as a downhill being converted into an uphill. The false map induces errors in the self-position estimate on the x-y plane in each frame, and these errors accumulate over time. In our experiment, after laser injection over 5 meters of travel (i.e., 1 second), the victim SLAM's self-position begins to diverge from reality and continues to do so, resulting in a 5 m shift to the right after driving 125 meters. The false map and self-position also significantly affect the motion-planning algorithm: the planned trajectory changes by 3°, with which the victim vehicle would enter the opposite lane after driving 35 meters. Finally, we discuss possible mitigations against the proposed attack.
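The mechanism behind the z-axis susceptibility can be illustrated with a small simulation. The sketch below is ours, not the paper's implementation: a translation-only point-to-point ICP stands in for the scan-matching step and aligns two synthetic flat-road scans, once with and once without spoofed points injected behind the sensor. All scene parameters (point counts, scan extent, the elevation of the fake points) are illustrative assumptions; the point is only that fake points with no true correspondences drag the per-frame pose estimate off the ground plane.

```python
# Minimal sketch (not the paper's implementation): a translation-only
# point-to-point ICP stands in for scan matching, showing how spoofed
# points injected behind the sensor bias the per-frame pose estimate
# along the z-axis. All scene parameters are illustrative assumptions.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)

# Synthetic flat-road scene observed twice: the vehicle drives 1 m
# forward (+x), so the same points appear shifted by -1 m in the
# second (sensor-frame) scan.
scene = np.column_stack([rng.uniform(-10, 10, 400),
                         rng.uniform(-3, 3, 400),
                         rng.normal(0.0, 0.02, 400)])   # z ~ 0: flat road
prev = scene
curr = scene + np.array([-1.0, 0.0, 0.0]) + rng.normal(0.0, 0.01, scene.shape)

def icp_translation(src, dst, t0, iters=30):
    """Estimate the translation aligning src to dst (rotation omitted)."""
    t = t0.copy()
    tree = cKDTree(dst)
    for _ in range(iters):
        _, idx = tree.query(src + t)               # nearest-neighbor matches
        t += (dst[idx] - (src + t)).mean(axis=0)   # mean-residual update
    return t

# Spoofed points behind the vehicle (negative x), lifted off the road.
fake = np.column_stack([rng.uniform(-12, -8, 80),
                        rng.uniform(-1, 1, 80),
                        rng.uniform(0.5, 1.5, 80)])
curr_spoofed = np.vstack([curr, fake])

t0 = np.array([0.9, 0.0, 0.0])   # odometry prior on the ego-motion
print("clean   estimate:", icp_translation(curr, prev, t0))
print("spoofed estimate:", icp_translation(curr_spoofed, prev, t0))
# The clean estimate recovers roughly (1, 0, 0); the spoofed run picks up
# a clear negative z component because the fake points, having no true
# correspondents, match against the road surface below them.
```

In a full SLAM pipeline, a per-frame bias of this kind is what warps the map (e.g., a downhill reconstructed as an uphill) and, through the false map, feeds errors into the x-y self-position estimate over time.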
