Ryo Suzuki (Keio University), Takami Sato (University of California, Irvine), Yuki Hayakawa, Kazuma Ikeda, Ozora Sako, Rokuto Nagata (Keio University), Qi Alfred Chen (University of California, Irvine), Kentaro Yoshioka (Keio University)

LiDAR (Light Detection and Ranging) is an essential sensor for autonomous driving (AD) and is increasingly integrated not only into prototype vehicles but also into commodity vehicles. Due to its critical safety implications, recent studies have explored its security risks and exposed its potential vulnerability to LiDAR spoofing attacks, which manipulate measurement data by emitting malicious lasers into the LiDAR. Nevertheless, deploying LiDAR spoofing attacks against moving AD vehicles still faces significant technical challenges, particularly in accurately aiming at the LiDAR of a moving vehicle from the roadside: the current state-of-the-art attack succeeds only at speeds of ≤5 km/h. Motivated by this, we design a novel tracking and aiming methodology and conduct a feasibility study to explore the practicality of LiDAR spoofing attacks against AD vehicles at cruising speeds. In this work, we report our initial results, demonstrating that our object removal attack makes the targeted pedestrian undetectable with ≥90% success rates in a real-world scenario in which a roadside adversary attacks a victim AD vehicle approaching at 35 km/h. Finally, we discuss the current challenges and our future plans.
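To illustrate why aiming at a moving vehicle is the hard part, the sketch below estimates the angular rate at which a roadside attacker would have to sweep a laser to stay on a LiDAR approaching at 35 km/h (the speed in the scenario above). The geometry is simple kinematics, not the paper's method, and the 2 m lateral offset and the distances are illustrative assumptions.

```python
import math

def bearing_rate(v_mps, lateral_m, longitudinal_m):
    """Angular rate (rad/s) needed to keep a roadside laser aimed at a
    LiDAR approaching at v_mps, given the attacker's lateral offset from
    the lane and the vehicle's current longitudinal distance.

    Bearing is theta = atan2(lateral, longitudinal); differentiating with
    respect to time gives d(theta)/dt = v * lateral / (long^2 + lateral^2).
    """
    return v_mps * lateral_m / (longitudinal_m ** 2 + lateral_m ** 2)

v = 35 / 3.6  # 35 km/h in m/s
for x in (30.0, 10.0, 3.0):  # assumed distances as the vehicle approaches
    rate_deg = math.degrees(bearing_rate(v, 2.0, x))
    print(f"{x:5.1f} m ahead: {rate_deg:6.2f} deg/s")
```

The required sweep rate grows from roughly 1 deg/s at 30 m to tens of deg/s as the vehicle draws level with the attacker, which is why a tracking and aiming methodology (rather than a fixed laser) becomes necessary at cruising speeds.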
