Michele Marazzi, Stefano Longari, Michele Carminati, Stefano Zanero (Politecnico di Milano)

ZOOX AutoDriving Security Award Runner-up!

With the increasing interest in autonomous vehicles (AVs), ensuring their safety and security is becoming crucial. The introduction of advanced features has increased the need for various interfaces to communicate with the external world, creating new attack vectors that attackers can exploit to alter sensor data. LiDAR sensors are widely employed to support autonomous driving features and generate the point cloud data that ADAS use to build a 3D map of the vehicle's surroundings. Tampering attacks on LiDAR-generated data can compromise the vehicle's functionalities and seriously threaten passengers and other road users. Existing approaches to LiDAR data tampering detection exhibit security flaws and can be bypassed by attackers through design vulnerabilities. This paper proposes a novel approach for tampering detection of LiDAR-generated data in AVs based on a watermarking technique. We validate our approach through experiments that demonstrate its feasibility in real-world, time-constrained scenarios and its efficacy in detecting LiDAR tampering attacks. Our approach outperforms current state-of-the-art LiDAR watermarking techniques while addressing critical issues related to watermark security and imperceptibility.
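The abstract does not describe the paper's actual embedding or verification algorithm, so the snippet below is only an illustrative sketch of the general idea of fragile watermarking on LiDAR point clouds: a keyed, pseudo-random bit is hidden per point by nudging one coordinate within the sensor's noise floor, and verification fails if the points are altered. The quantization step, the key handling, the parity-based embedding, and all function names are my own assumptions, not the authors' scheme.

```python
# Hypothetical sketch of fragile point-cloud watermarking (not the paper's method).
# A keyed bit stream is embedded in the parity of quantized z-coordinates; tampering
# with the watermarked points flips roughly half of the embedded bits and is detected.
import hmac, hashlib
import numpy as np

STEP = 0.002            # assumed quantization step (metres), below typical LiDAR noise
KEY = b"shared-secret"  # placeholder key; a real system would provision this securely

def _bits(key: bytes, n_points: int) -> np.ndarray:
    # Derive a pseudo-random bit per point from the key (HMAC-SHA256 in counter mode).
    digest = b""
    counter = 0
    while len(digest) * 8 < n_points:
        digest += hmac.new(key, counter.to_bytes(4, "big"), hashlib.sha256).digest()
        counter += 1
    bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))[:n_points]
    return bits.astype(np.int64)

def embed(points: np.ndarray, key: bytes = KEY) -> np.ndarray:
    # points: (N, 3) array of x, y, z coordinates; returns a watermarked copy.
    bits = _bits(key, len(points))
    q = np.round(points[:, 2] / STEP).astype(np.int64)  # quantize z
    q = (q & ~np.int64(1)) | bits                       # force parity to the watermark bit
    out = points.copy()
    out[:, 2] = q * STEP                                # displacement is at most STEP/2
    return out

def verify(points: np.ndarray, key: bytes = KEY, tol: float = 0.01) -> bool:
    # Accept the frame only if at least (1 - tol) of the points carry the expected bit.
    bits = _bits(key, len(points))
    q = np.round(points[:, 2] / STEP).astype(np.int64)
    return ((q & 1) == bits).mean() >= 1.0 - tol

if __name__ == "__main__":
    cloud = np.random.uniform(-50, 50, size=(10_000, 3))
    wm = embed(cloud)
    print("clean frame verifies:", verify(wm))           # expected: True
    wm[100:600, 2] += np.random.uniform(0.1, 0.5, 500)   # simulate an injection-style edit
    print("tampered frame verifies:", verify(wm))        # expected: False
```

In this toy version the embedding displaces each point by at most one millimetre, which keeps the watermark imperceptible to downstream perception, while the keyed bit stream prevents an attacker without the key from re-embedding a valid watermark after tampering; the paper's contribution lies in doing this securely under real-time constraints, which this sketch does not attempt to reproduce.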
