Viet Quoc Vo (The University of Adelaide), Ehsan Abbasnejad (The University of Adelaide), Damith C. Ranasinghe (The University of Adelaide)

Machine learning models are critically susceptible to evasion attacks built from adversarial examples. Generally, adversarial examples (modified inputs deceptively similar to the original input) are constructed in whitebox settings by adversaries with full access to the model. However, recent blackbox attacks have shown a remarkable reduction in the number of queries needed to craft adversarial examples. Particularly alarming is the now practical ability to exploit nothing more than the classification decision (the hard label) exposed by a trained model's access interface, as provided by a growing number of Machine Learning as a Service (MLaaS) providers, including Google, Microsoft and IBM, and used by a plethora of applications built on these models. An attack that exploits only the predicted hard label returned by a model query to craft adversarial examples is known as a decision-based attack.
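
To make the threat model concrete, here is a minimal sketch of the decision-based (hard-label) setting described above. The `query_label` endpoint is hypothetical; real MLaaS interfaces differ in request and response formats, but the essential constraint is the same: the adversary observes only a class label per query.

```python
import numpy as np

def query_label(x: np.ndarray) -> int:
    """Stand-in for a deployed model's access interface that returns only
    the top-1 class index: no probabilities, logits or gradients."""
    raise NotImplementedError("replace with a call to the MLaaS endpoint")

def is_adversarial(x_adv: np.ndarray, target_class: int) -> bool:
    # In the decision-based setting, attack success can only be checked
    # by observing the hard label returned for a query.
    return query_label(x_adv) == target_class
```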

In our study, we first take a deep dive into recent state-of-the-art decision-based attacks from ICLR and IEEE S&P to highlight how costly it is to discover low-distortion adversarial examples using approximate gradient estimation methods. We then develop a robust class of query-efficient attacks that avoid both entrapment in local minima and misdirection by the noisy gradients that afflict gradient estimation methods. The attack method we propose, RamBoAttack, exploits the notion of Randomized Block Coordinate Descent to explore the hidden classifier manifold, targeting perturbations that manipulate only localized input features and thereby sidestepping the shortcomings of gradient estimation methods. Importantly, RamBoAttack is demonstrably more robust to the particular sample inputs available to an adversary and to the choice of targeted class. Overall, for a given target class, RamBoAttack achieves lower distortion and a higher attack success rate within a given query budget. We curate our results using the large-scale, high-resolution ImageNet dataset and open-source our attack, test samples and artifacts.
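
The following is a minimal, assumption-laden sketch of the randomized block coordinate descent idea named above, not the authors' released implementation: starting from any input the oracle already labels as the target class, the attack repeatedly picks one random block of pixels and nudges only those localized features toward the original image, accepting a move only when the hard label is preserved and the distortion strictly decreases. The function names, block size and step size are illustrative choices; `query_label` is the hypothetical hard-label interface from the earlier sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def query_label(x: np.ndarray) -> int:
    """Hypothetical hard-label oracle (see the earlier sketch)."""
    raise NotImplementedError("replace with a call to the deployed model")

def rambo_sketch(x_orig: np.ndarray, x_start: np.ndarray,
                 target_class: int, block: int = 8,
                 steps: int = 1000) -> np.ndarray:
    """Shrink the perturbation of `x_start` (any image the oracle already
    labels `target_class`) toward `x_orig`, one random block at a time."""
    h, w, _ = x_orig.shape
    best = x_start.copy()
    for _ in range(steps):
        # Randomly select one block (coordinate group) of pixels to update.
        i = rng.integers(0, h - block)
        j = rng.integers(0, w - block)
        cand = best.copy()
        # Nudge only this localized block toward the original image.
        cand[i:i + block, j:j + block] += 0.1 * (
            x_orig[i:i + block, j:j + block] - cand[i:i + block, j:j + block])
        # Accept the move only if the hard label is preserved and the
        # L2 distortion strictly decreases (one oracle query per step).
        if (query_label(cand) == target_class and
                np.linalg.norm(cand - x_orig) < np.linalg.norm(best - x_orig)):
            best = cand
    return best
```

Updating one random block at a time is what lets the search escape regions where estimated gradients are noisy or misleading: each accepted step is validated directly against the hard-label oracle rather than against an approximate gradient.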
