Viet Quoc Vo (The University of Adelaide), Ehsan Abbasnejad (The University of Adelaide), Damith C. Ranasinghe (The University of Adelaide)

Machine learning models are critically susceptible to evasion attacks from adversarial examples. Generally, adversarial examples (modified inputs deceptively similar to the original input) are constructed under whitebox settings by adversaries with full access to the model. However, recent attacks have shown a remarkable reduction in the number of queries needed to craft adversarial examples under blackbox settings. Particularly alarming is the now-practical ability to exploit only the classification decision (hard label) returned by a trained model's access interface, as provided by a growing number of Machine Learning as a Service (MLaaS) providers (including Google, Microsoft and IBM) and used by a plethora of applications incorporating these models. An adversary's ability to craft adversarial examples from only the predicted hard label of a model query is distinguished as a decision-based attack.

In our study, we first deep-dive into recent state-of-the-art decision-based attacks published at ICLR and IEEE S&P to highlight the costly nature of discovering low-distortion adversarial examples with approximate gradient estimation methods. We then develop a robust class of query-efficient attacks that avoid the entrapment in local minima and the misdirection from noisy gradients seen in gradient estimation methods. Our proposed attack, RamBoAttack, exploits the notion of Randomized Block Coordinate Descent to explore the hidden classifier manifold, targeting perturbations that manipulate only localized input features to address the shortcomings of gradient estimation methods. Importantly, RamBoAttack is demonstrably more robust to the different sample inputs available to an adversary and to the targeted class. Overall, for a given target class, RamBoAttack achieves lower distortion and a higher attack success rate within a given query budget. We curate our results using the large-scale, high-resolution ImageNet dataset and open-source our attack, test samples and artifacts.
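The core idea named in the abstract, randomized block coordinate descent over localized pixel blocks using only hard-label (decision-only) feedback, can be illustrated with a minimal sketch. This is not the authors' RamBoAttack implementation: the `decision` oracle is a hypothetical stand-in for a hard-label model query, and the block-selection and step rules are simplified assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def decision(x):
    # Hypothetical stand-in for a hard-label query: returns only a
    # class id, no scores or gradients. Here the input is treated as
    # "adversarial" (class 1) while its mean pixel value exceeds 0.5.
    return int(x.mean() > 0.5)

def block_coordinate_descent_sketch(x_orig, x_adv, block=4, steps=200):
    """Illustrative randomized block coordinate descent: repeatedly
    pick a random pixel block, move only that block toward the
    original (benign) image to shrink distortion, and accept the
    move only if the hard label stays adversarial."""
    x = x_adv.copy()
    h, w = x.shape
    for _ in range(steps):
        i = rng.integers(0, h - block + 1)
        j = rng.integers(0, w - block + 1)
        cand = x.copy()
        # Move the selected block halfway toward the original input,
        # perturbing only localized input features.
        cand[i:i + block, j:j + block] += 0.5 * (
            x_orig[i:i + block, j:j + block] - cand[i:i + block, j:j + block])
        if decision(cand) == 1:  # still misclassified: accept the move
            x = cand
    return x

x_orig = np.zeros((16, 16))   # benign input (class 0)
x_adv = np.ones((16, 16))     # starting adversarial input (class 1)
x_final = block_coordinate_descent_sketch(x_orig, x_adv)
dist_before = np.linalg.norm(x_adv - x_orig)
dist_after = np.linalg.norm(x_final - x_orig)
```

After the loop, `x_final` remains on the adversarial side of the decision boundary while its distortion from `x_orig` has shrunk, mirroring the goal of finding low-distortion adversarial examples without any gradient estimates.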
