Michele Marazzi, Stefano Longari, Michele Carminati, Stefano Zanero (Politecnico di Milano)

ZOOX AutoDriving Security Award Runner-up!

With the increasing interest in autonomous vehicles (AVs), ensuring their safety and security is becoming crucial. The introduction of advanced features has increased the need for interfaces that communicate with the external world, creating new attack vectors that adversaries can exploit to alter sensor data. LiDAR sensors are widely employed to support autonomous driving features and generate the point cloud data that ADAS use to build a 3D map of the vehicle's surroundings. Tampering attacks on LiDAR-generated data can compromise the vehicle's functionalities and seriously threaten passengers and other road users. Existing approaches to LiDAR data tampering detection exhibit design vulnerabilities that attackers can exploit to bypass them. This paper proposes a novel approach for tampering detection of LiDAR-generated data in AVs based on a watermarking technique. We validate our approach through experiments that demonstrate its feasibility in real-world, time-constrained scenarios and its efficacy in detecting LiDAR tampering attacks. Our approach outperforms current state-of-the-art LiDAR watermarking techniques while addressing critical issues related to watermark security and imperceptibility.
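The abstract does not describe the paper's actual watermarking scheme, so the following is only an illustrative sketch of the general idea of fragile watermarking for point cloud tamper detection: a secret key and frame identifier derive a pseudorandom bit sequence, each bit is embedded by nudging a point's quantized z-coordinate to the matching parity (an imperceptible, sub-noise-floor change), and a verifier with the same key checks how many parities still match. All names, the 1 mm embedding step, and the detection threshold are assumptions for this example, not the authors' design.

```python
import hashlib
import hmac
import struct

QUANT = 0.001  # assumed embedding step (1 mm), below typical LiDAR noise


def _keystream_bits(key: bytes, frame_id: int, n: int) -> list[int]:
    """Derive n pseudorandom bits from a secret key and frame id (HMAC-SHA256 in counter mode)."""
    bits = []
    counter = 0
    while len(bits) < n:
        digest = hmac.new(key, struct.pack(">QQ", frame_id, counter),
                          hashlib.sha256).digest()
        for byte in digest:
            for i in range(8):
                bits.append((byte >> i) & 1)
        counter += 1
    return bits[:n]


def embed(points, key: bytes, frame_id: int):
    """Embed the keyed bit sequence by forcing each quantized z-coordinate's parity."""
    bits = _keystream_bits(key, frame_id, len(points))
    out = []
    for (x, y, z), b in zip(points, bits):
        q = round(z / QUANT)
        if (q & 1) != b:
            q += 1  # shift by one quantization step to fix the parity
        out.append((x, y, q * QUANT))
    return out


def verify(points, key: bytes, frame_id: int, threshold: float = 0.99) -> bool:
    """Tampering with the points breaks the keyed parities; accept only near-perfect matches."""
    bits = _keystream_bits(key, frame_id, len(points))
    ok = sum(1 for (_, _, z), b in zip(points, bits)
             if (round(z / QUANT) & 1) == b)
    return ok / len(points) >= threshold
```

Because the watermark is fragile by design, even small modifications to the protected coordinates destroy the keyed parities, and an attacker without the key cannot re-embed a valid pattern after tampering.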
