Taifeng Liu (Xidian University), Yang Liu (Xidian University), Zhuo Ma (Xidian University), Tong Yang (Peking University), Xinjing Liu (Xidian University), Teng Li (Xidian University), Jianfeng Ma (Xidian University)

The vision-based perception modules in autonomous vehicles (AVs) are vulnerable to physical adversarial patch attacks, yet most existing attacks indiscriminately affect all passing vehicles. This paper introduces L-HAWK, a novel controllable physical adversarial patch activated by long-distance laser signals. L-HAWK targets specific vehicles: the patch becomes adversarial only when triggered by a laser signal and remains benign under normal conditions. To achieve this goal and address the unique challenges posed by laser signals, we propose an asynchronous learning method that determines the optimal laser parameters and the corresponding adversarial patch. To enhance attack robustness in real-world scenarios, we further introduce a multi-angle and multi-position simulation mechanism, a noise approximation approach, and a progressive sampling-based method. We validate L-HAWK through extensive experiments in both digital and physical environments. Compared with TPatch (USENIX Security '23), which achieves a 59% success rate at 7 meters, L-HAWK achieves a 91.9% average attack success rate at 50 meters, a 56% relative improvement in attack success rate and a more than sevenfold increase in attack distance.
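
To make the trigger-conditioned idea concrete, the sketch below shows one way a patch could be optimized to stay benign without a laser and become adversarial when a laser spot is present, while averaging over random viewpoints in the spirit of the multi-angle and multi-position simulation. This is a minimal, hypothetical illustration only, not L-HAWK's actual method or released code: the `detector` (assumed to return class logits), the crude red-spot laser model, the fixed patch placement, the loss weighting, and all hyperparameters are assumptions introduced here for clarity; `benign_label` and `target_label` are assumed to be class-index tensors.

```python
# Hypothetical sketch of a trigger-conditioned patch objective with
# EOT-style multi-angle / multi-position simulation (PyTorch).
import math
import random

import torch
import torch.nn.functional as F


def random_view(scene: torch.Tensor) -> torch.Tensor:
    """Simulate a different viewing angle/position with a random affine warp."""
    a = random.uniform(-0.3, 0.3)                                  # rotation (radians)
    tx, ty = random.uniform(-0.1, 0.1), random.uniform(-0.1, 0.1)  # normalized shift
    theta = torch.tensor([[math.cos(a), -math.sin(a), tx],
                          [math.sin(a),  math.cos(a), ty]],
                         dtype=scene.dtype).unsqueeze(0)
    grid = F.affine_grid(theta, list(scene.shape), align_corners=False)
    return F.grid_sample(scene, grid, align_corners=False)


def simulate_laser(scene: torch.Tensor, intensity: float = 0.6) -> torch.Tensor:
    """Crude stand-in for the laser trigger: blend a red spot into the image center."""
    _, _, h, w = scene.shape
    mask = torch.zeros(1, 1, h, w, dtype=scene.dtype)
    mask[:, :, h // 2 - 8: h // 2 + 8, w // 2 - 8: w // 2 + 8] = 1.0
    red = torch.tensor([1.0, 0.1, 0.1], dtype=scene.dtype).view(1, 3, 1, 1)
    return scene * (1 - intensity * mask) + red * (intensity * mask)


def paste_patch(scene: torch.Tensor, patch: torch.Tensor) -> torch.Tensor:
    """Place the patch in the top-left corner (placement could also be randomized)."""
    _, _, h, w = scene.shape
    ph, pw = patch.shape[-2:]
    mask = torch.zeros(1, 1, h, w, dtype=scene.dtype)
    mask[:, :, :ph, :pw] = 1.0
    padded = F.pad(patch.clamp(0, 1), (0, w - pw, 0, h - ph))
    return scene * (1 - mask) + padded * mask


def optimize_patch(detector, scenes, benign_label, target_label, steps=500):
    """Jointly minimize a 'stay benign' loss (no laser) and an 'attack' loss
    (laser present) over randomly transformed scenes."""
    patch = torch.rand(1, 3, 64, 64, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=1e-2)
    for _ in range(steps):
        scene = random_view(random.choice(scenes))   # multi-angle / multi-position
        scene = paste_patch(scene, patch)
        loss = (F.cross_entropy(detector(scene), benign_label)                     # hidden
                + F.cross_entropy(detector(simulate_laser(scene)), target_label))  # triggered
        opt.zero_grad()
        loss.backward()
        opt.step()
    return patch.detach().clamp(0, 1)
```

A real attack would additionally have to learn the physical laser parameters themselves (wavelength, power, pulse timing) alongside the patch, which is what the paper's asynchronous learning method and noise approximation are designed to handle; the sketch above only covers the image-space side of the objective.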
