Adrian Shuai Li (Purdue University), Arun Iyengar (Intelligent Data Management and Analytics, LLC), Ashish Kundu (Cisco Research), Elisa Bertino (Purdue University)

In applying deep learning to malware classification, it is crucial to account for malware evolution, which can cause trained classifiers to fail on drifted malware. Existing solutions address concept drift with active learning: they select new samples for analysts to label and then retrain the classifier with the new labels. Our key finding is that current retraining techniques yield suboptimal results. They overlook that updating the model with scarce drifted samples requires learning features that remain consistent across pre-drift and post-drift data. The model should therefore be able to disregard features that, while useful for classifying pre-drift data, are absent from post-drift data, thereby preventing prediction degradation. In this paper, we propose a new technique for detecting and classifying drifted malware that learns drift-invariant features in malware control flow graphs by leveraging graph neural networks with adversarial domain adaptation. We compare it with existing model retraining methods in active learning-based malware detection systems and with domain adaptation techniques from the vision domain. Our approach significantly improves drifted-malware detection on publicly available benchmarks and on real-world malware databases reported daily by security companies in 2024. We also evaluated our approach on predicting multiple malware families that drifted over time. A thorough evaluation shows that our approach outperforms state-of-the-art approaches.
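The abstract does not give implementation details, but the core idea it describes (a GNN over control flow graphs trained so that a domain discriminator cannot distinguish pre-drift from post-drift embeddings) maps naturally onto DANN-style gradient reversal. The following is a minimal sketch under that assumption; the class names, layer sizes, and use of PyTorch Geometric are illustrative choices, not the authors' implementation.

```python
# Minimal sketch: GNN encoder over control-flow graphs plus a
# gradient-reversal domain discriminator (DANN-style adversarial
# domain adaptation), so learned features stay consistent across
# pre-drift ("source") and post-drift ("target") malware.
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv, global_mean_pool


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None


class DriftInvariantGNN(nn.Module):
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hid_dim)
        self.conv2 = GCNConv(hid_dim, hid_dim)
        self.classifier = nn.Linear(hid_dim, n_classes)   # malware label head
        self.domain_disc = nn.Sequential(                 # pre/post-drift head
            nn.Linear(hid_dim, hid_dim), nn.ReLU(), nn.Linear(hid_dim, 2))

    def forward(self, x, edge_index, batch, lam=1.0):
        h = torch.relu(self.conv1(x, edge_index))
        h = torch.relu(self.conv2(h, edge_index))
        g = global_mean_pool(h, batch)                    # graph-level embedding
        y_logits = self.classifier(g)
        # The reversed gradient trains the encoder to *fool* the
        # discriminator, pushing it toward drift-invariant features.
        d_logits = self.domain_disc(GradReverse.apply(g, lam))
        return y_logits, d_logits
```

In training, one would typically sum a classification loss on labeled samples with a domain-classification loss on both pre-drift and post-drift batches; because the gradient from the domain head is reversed at the encoder, features that help separate the two domains (e.g., features present only in pre-drift data) are suppressed rather than exploited.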
