Weiran Lin (Carnegie Mellon University), Keane Lucas (Carnegie Mellon University), Neo Eyal (Tel Aviv University), Lujo Bauer (Carnegie Mellon University), Michael K. Reiter (Duke University), Mahmood Sharif (Tel Aviv University)

Machine-learning models are known to be vulnerable to evasion attacks that perturb model inputs to induce misclassifications. In this work, we identify real-world scenarios where the true threat cannot be assessed accurately by existing attacks. Specifically, we find that conventional metrics measuring targeted and untargeted robustness do not appropriately reflect a model's ability to withstand attacks from one set of source classes to another set of target classes. To address the shortcomings of existing methods, we formally define a new metric, termed group-based robustness, that complements existing metrics and is better suited for evaluating model performance in certain attack scenarios. We show empirically that group-based robustness allows us to distinguish between models' vulnerability to specific threat models in situations where traditional robustness metrics do not apply. Moreover, to measure group-based robustness efficiently and accurately, we 1) propose two loss functions and 2) identify three new attack strategies. We show empirically that, with comparable success rates, finding evasive samples using our new loss functions saves computation by a factor as large as the number of targeted classes, and finding evasive samples using our new attack strategies saves time by up to 99% compared to brute-force search methods. Finally, we propose a defense method that increases group-based robustness by up to 3.52 times.
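The abstract does not give the exact form of the two proposed loss functions, but a minimal sketch of one plausible group-targeted loss illustrates where the computational saving comes from: a single attack run can target the whole group at once, rather than running one targeted attack per class in the group. The function name, the margin formulation, and the PyTorch framing below are illustrative assumptions, not the paper's implementation.

```python
import torch

def group_targeted_loss(logits: torch.Tensor, target_classes: list) -> torch.Tensor:
    """Hypothetical group-based loss: the attack succeeds once the model
    predicts ANY class in the target group, so we penalize the margin
    between the best non-target logit and the best target logit."""
    num_classes = logits.shape[-1]

    # Boolean mask selecting the logits of the target group.
    target_mask = torch.zeros(num_classes, dtype=torch.bool, device=logits.device)
    target_mask[target_classes] = True

    # Best logit inside vs. outside the target group.
    best_target = logits[..., target_mask].max(dim=-1).values
    best_other = logits[..., ~target_mask].max(dim=-1).values

    # Minimizing this drives the prediction into the target group;
    # it is non-positive once any target class wins.
    return (best_other - best_target).mean()


# Sketch of use inside a gradient-based attack step (e.g., PGD):
# logits = model(x_adv)                          # shape (batch, num_classes)
# loss = group_targeted_loss(logits, [3, 7, 9])  # hypothetical target group
# loss.backward()                                # gradient w.r.t. the perturbation
```

Under this formulation, each attack step needs one forward/backward pass to cover all classes in the group, whereas per-class targeted attacks would repeat the attack once per target, which is consistent with the abstract's claim that the saving can be as large as the number of targeted classes.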
