Shiqing Ma (Purdue University), Yingqi Liu (Purdue University), Guanhong Tao (Purdue University), Wen-Chuan Lee (Purdue University), Xiangyu Zhang (Purdue University)

Deep Neural Networks (DNNs) are vulnerable to adversarial samples, which are generated by perturbing correctly classified inputs so that the model misbehaves (e.g., misclassifies them). This can lead to disastrous consequences, particularly in security-sensitive applications. Existing defense and detection techniques work well for specific attacks under various assumptions (e.g., that the set of possible attacks is known beforehand), but they are not sufficiently general to protect against a broader range of attacks. In this paper, we analyze the internals of DNN models under various attacks and identify two common exploitation channels: the provenance channel and the activation value distribution channel. We then propose a novel technique that extracts DNN invariants and uses them to perform runtime adversarial sample detection. Our experimental results on 11 different kinds of attacks, 13 models, and popular datasets including ImageNet show that our technique effectively detects all of these attacks (over 90% accuracy) with limited false positives. We also compare it with three state-of-the-art techniques: the Local Intrinsic Dimensionality (LID) based method, denoiser-based methods (i.e., MagNet and HGD), and the prediction-inconsistency-based approach (i.e., feature squeezing). Our experiments show promising results.
