Shiqing Ma (Purdue University), Yingqi Liu (Purdue University), Guanhong Tao (Purdue University), Wen-Chuan Lee (Purdue University), Xiangyu Zhang (Purdue University)

Deep Neural Networks (DNNs) are vulnerable to adversarial samples:
inputs generated by perturbing correctly classified inputs so that a
DNN model misbehaves (e.g., misclassifies). This can potentially
lead to disastrous consequences, especially in security-sensitive
applications. Existing defense and detection techniques work well for
specific attacks under various assumptions (e.g., that the set of
possible attacks is known beforehand), but they are not sufficiently
general to protect against a broader range of attacks. In this paper,
we analyze the internals of DNN models under various attacks and
identify two common exploitation channels: the provenance channel and
the activation value distribution channel. We then propose a novel
technique to extract DNN invariants and use them to perform runtime
adversarial sample detection. Our experimental results on 11 different
kinds of attacks, 13 models, and popular datasets including ImageNet
show that our technique can effectively detect all these attacks
(over 90% accuracy) with limited false positives. We also compare it
with three state-of-the-art techniques including the Local Intrinsic
Dimensionality (LID) based method, denoiser based methods (i.e.,
MagNet and HGD), and the prediction inconsistency based approach
(i.e., feature squeezing). Our experiments show promising results.
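To make the idea of activation-distribution invariants concrete, here is a minimal sketch of one plausible instantiation: fit per-class statistics of hidden-layer activations on benign inputs, then flag runtime inputs whose activations deviate strongly. The function names (`fit_invariants`, `check_input`), the simple per-dimension Gaussian model, and the z-score threshold are all illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def fit_invariants(activations, labels):
    """Learn per-class mean/std of hidden-layer activations from benign inputs."""
    invariants = {}
    for c in np.unique(labels):
        acts = activations[labels == c]
        # Small epsilon avoids division by zero for near-constant dimensions.
        invariants[c] = (acts.mean(axis=0), acts.std(axis=0) + 1e-8)
    return invariants

def check_input(activation, predicted_class, invariants, z_thresh=3.0):
    """Flag an input as adversarial if its activations violate the invariant."""
    mean, std = invariants[predicted_class]
    z = np.abs((activation - mean) / std)
    return z.mean() > z_thresh  # True -> likely adversarial

# Usage: benign activations cluster near the class statistics,
# while a far-off activation vector is flagged.
rng = np.random.default_rng(0)
benign = rng.normal(0.0, 1.0, size=(500, 16))  # stand-in for layer activations
labels = np.zeros(500, dtype=int)
inv = fit_invariants(benign, labels)
print(check_input(benign[0], 0, inv))          # typical input -> False
print(check_input(np.full(16, 10.0), 0, inv))  # anomalous activations -> True
```

A real detector would combine such distribution checks with the provenance channel (which neurons activate along the classification path), but this sketch shows only the distribution side under the stated assumptions.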
