Rui Zhu (Indiana University Bloomington), Di Tang (Indiana University Bloomington), Siyuan Tang (Indiana University Bloomington), Zihao Wang (Indiana University Bloomington), Guanhong Tao (Purdue University), Shiqing Ma (University of Massachusetts Amherst), XiaoFeng Wang (Indiana University Bloomington), Haixu Tang (Indiana University Bloomington)

Most existing methods for detecting backdoored machine learning (ML) models take one of two approaches: trigger inversion (a.k.a. reverse engineering) or weight analysis (a.k.a. model diagnosis). In particular, gradient-based trigger inversion is considered among the most effective backdoor detection techniques, as evidenced by the TrojAI competition, the Trojan Detection Challenge, and BackdoorBench. However, little has been done to understand why this technique works so well and, more importantly, whether it truly raises the bar for backdoor attacks. In this paper, we report the first attempt to answer this question by analyzing the change rate of the backdoored model's output around its trigger-carrying inputs. Our study shows that existing attacks tend to inject backdoors characterized by a low change rate around trigger-carrying inputs, which makes them easy to capture by gradient-based trigger inversion. Meanwhile, we found that a low change rate is not necessary for a backdoor attack to succeed: we design a new attack enhancement method called Gradient Shaping (GRASP), which inverts the direction of adversarial training to increase the change rate of a backdoored model around the trigger, without undermining its backdoor effect. We also provide a theoretical analysis that explains the effectiveness of this new technique and the fundamental weakness of gradient-based trigger inversion. Finally, through both theoretical and experimental analyses, we show that the GRASP enhancement does not reduce the effectiveness of stealthy attacks designed to evade weight-analysis-based backdoor detection, nor their effectiveness against other backdoor mitigation methods that do not rely on detection.
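To make these ideas concrete, below are two minimal Python sketches. The first illustrates a generic gradient-based trigger inversion loop (in the spirit of defenses like Neural Cleanse); the mask/pattern parameterization, the L1 weight, and the hyperparameters are illustrative assumptions, not any specific tool's implementation.

import torch

def invert_trigger(model, images, target_label, steps=500, lam=1e-2, lr=0.1):
    # Optimize a mask and pattern so that stamping them onto clean images
    # flips the model's prediction to target_label, while an L1 penalty
    # keeps the recovered trigger small.
    mask = torch.zeros(images.shape[-2:], requires_grad=True)
    pattern = torch.rand(images.shape[1:], requires_grad=True)
    opt = torch.optim.Adam([mask, pattern], lr=lr)
    target = torch.full((images.shape[0],), target_label, dtype=torch.long)
    for _ in range(steps):
        m = torch.sigmoid(mask)            # keep the mask in [0, 1]
        stamped = images * (1 - m) + pattern.clamp(0, 1) * m
        loss = torch.nn.functional.cross_entropy(model(stamped), target)
        loss = loss + lam * m.abs().sum()  # prefer small triggers
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.sigmoid(mask).detach(), pattern.detach()

A backdoor with a low change rate around trigger-carrying inputs gives this optimization a wide, smooth basin to descend into, which is why inversion succeeds so often. The second sketch shows one plausible way a GRASP-style poisoning step could increase that change rate; apply_trigger, the Gaussian noise scale, and the 1:1 augmentation ratio are hypothetical illustrations, not the authors' actual implementation.

import numpy as np

def apply_trigger(x, trigger, mask):
    # Stamp the trigger patch onto input x where mask == 1.
    return x * (1 - mask) + trigger * mask

def grasp_poison(x, y, trigger, mask, target_label, noise_std=0.1, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    # Standard backdoor sample: exact trigger, relabeled to the target class.
    backdoor_x = apply_trigger(x, trigger, mask)
    # "Near-miss" sample: a noise-perturbed trigger that KEEPS the original
    # label y, so the model's output must change sharply (high change rate)
    # as soon as the trigger is perturbed -- the reverse of adversarial
    # training, which smooths the model around its training inputs.
    noisy = np.clip(trigger + rng.normal(0.0, noise_std, trigger.shape), 0.0, 1.0)
    near_miss_x = apply_trigger(x, noisy, mask)
    return [(backdoor_x, target_label), (near_miss_x, y)]

Training on both kinds of samples narrows the trigger's basin of attraction in the loss landscape, starving the gradient-based search above of the smooth descent path it relies on, while leaving the exact trigger fully effective.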
