Dung Thuy Nguyen (Vanderbilt University), Ngoc N. Tran (Vanderbilt University), Taylor T. Johnson (Vanderbilt University), Kevin Leach (Vanderbilt University)

In recent years, the rise of machine learning (ML) in cybersecurity has brought new challenges, including the increasing threat of backdoor poisoning attacks on ML malware classifiers. These attacks aim to manipulate model behavior when provided with a particular input trigger. For instance, adversaries could inject malicious samples into public malware repositories, contaminating the training data and potentially causing the ML model to misclassify malware. Current countermeasures predominantly focus on detecting poisoned samples by leveraging disagreements within the outputs of a diverse set of ensemble models on training data points.
However, these methods are not applicable in scenarios involving ML-as-a-Service (MLaaS) or for users who seek to purify a backdoored model post-training. Addressing this scenario, we introduce PBP, a post-training defense for malware classifiers that mitigates various types of backdoor embeddings without assuming any specific backdoor embedding mechanism. Our method exploits the influence of backdoor attacks on the activation distribution of neural networks, independent of the trigger-embedding method.
In the presence of a backdoor attack, the activation distribution of each layer is distorted into a mixture of distributions. By regulating the statistics of the batch normalization layers, we can guide a backdoored model to perform similarly to a clean one. Our method demonstrates substantial advantages over several state-of-the-art methods, as evidenced by experiments on two datasets, two types of backdoor methods, and various attack configurations. Our experiments showcase that PBP can mitigate even state-of-the-art backdoor attacks for malware classifiers, e.g., Jigsaw Puzzle, which was previously demonstrated to be stealthy against existing backdoor defenses. Notably, our approach requires only a small portion of the training data, just 1%, to purify the backdoor and reduce the attack success rate from 100% to almost 0%, a 100-fold improvement over the baseline methods. Our code is available at https://github.com/judydnguyen/pbp-backdoor-purification-official.
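
As a rough PyTorch sketch of the batch-normalization intuition described above (not the authors' released implementation): if a small clean subset is available, the BN running statistics can be re-estimated from clean activations alone, so the layers no longer reflect the poisoned mixture. The loader name clean_loader and the recalibration-only procedure are illustrative assumptions; the full PBP purification procedure is described in the paper.

import torch
import torch.nn as nn

@torch.no_grad()
def recalibrate_bn_stats(model: nn.Module, clean_loader) -> nn.Module:
    """Sketch: re-estimate BN running statistics from clean data only."""
    for module in model.modules():
        if isinstance(module, nn.modules.batchnorm._BatchNorm):
            module.reset_running_stats()   # forget statistics learned on poisoned data
            module.momentum = None         # None = cumulative moving average in PyTorch
    model.train()  # in train mode, BN layers update their running statistics
    for inputs, _ in clean_loader:
        model(inputs)  # forward passes only; no gradients, weights never change
    model.eval()
    return model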
