Shiqing Ma (Purdue University), Yingqi Liu (Purdue University), Guanhong Tao (Purdue University), Wen-Chuan Lee (Purdue University), Xiangyu Zhang (Purdue University)

Deep Neural Networks (DNN) are vulnerable to adversarial samples that
are generated by perturbing correctly classified inputs to cause DNN
models to misbehave (e.g., misclassification). This can potentially
lead to disastrous consequences especially in security-sensitive
applications. Existing defense and detection techniques work well for
specific attacks under various assumptions (e.g., the set of possible
attacks is known beforehand). However, they are not sufficiently
general to protect against a broader range of attacks. In this paper,
we analyze the internals of DNN models under various attacks and
identify two common exploitation channels: the provenance channel and
the activation value distribution channel. We then propose a novel
technique to extract DNN invariants and use them to perform runtime
adversarial sample detection. Our experimental results, covering 11
different kinds of attacks against 13 models on popular datasets
including ImageNet, show that our technique can effectively detect all
these attacks (over 90% accuracy) with limited false positives. We also compare it
with three state-of-the-art techniques including the Local Intrinsic
Dimensionality (LID) based method, denoiser based methods (i.e.,
MagNet and HGD), and the prediction inconsistency based approach
(i.e., feature squeezing). Our experiments show promising results.
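The abstract describes the approach only at a high level. As an illustration of the general idea behind the activation-value-distribution channel (not the authors' implementation), one can fit a one-class model per layer on the activations produced by benign inputs and, at runtime, flag inputs whose activations fall outside that learned distribution. The sketch below uses placeholder activation data and scikit-learn's OneClassSVM purely to convey this idea; the layer, data shapes, and model choice are hypothetical assumptions.

```python
# Minimal sketch, assuming per-layer activation vectors have already been
# collected (e.g., via framework hooks) for benign training inputs.
# This is an illustrative stand-in, not the paper's actual invariant extraction.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Placeholder: 500 benign activation vectors of one hidden layer (64 units).
benign_activations = rng.normal(loc=0.0, scale=1.0, size=(500, 64))

# "Invariant": a boundary around the benign activation value distribution.
invariant = OneClassSVM(kernel="rbf", nu=0.05).fit(benign_activations)

def check_input(layer_activation: np.ndarray) -> bool:
    """Return True if the activation is consistent with the learned invariant."""
    return invariant.predict(layer_activation.reshape(1, -1))[0] == 1

# Runtime check: activations far outside the benign distribution are flagged.
suspicious = rng.normal(loc=5.0, scale=1.0, size=64)
print(check_input(benign_activations[0]))  # likely True  (benign)
print(check_input(suspicious))             # likely False (potential adversarial sample)
```

A full detector along these lines would combine such per-layer checks with the provenance channel (how activations in one layer induce activations in the next) rather than relying on a single layer's value distribution.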
