Dennis Jacob, Chong Xiang, Prateek Mittal (Princeton University)

The advent of deep learning has brought vast improvements to computer vision systems and facilitated the development of self-driving vehicles. Nevertheless, these models have proven susceptible to adversarial attacks. Of particular importance to the research community are patch attacks, which can be realized in the physical world. While certifiable defenses against patch attacks have been developed for tasks such as single-label classification, no such defense exists for multi-label classification. In this work, we propose Multi-Label PatchCleanser, an extension of the current state-of-the-art (SOTA) certifiable defense for single-label classification. Our approach achieves non-trivial certified robustness on the MSCOCO 2014 validation dataset while maintaining high clean performance. Additionally, we leverage a key constraint between patch and object locations to develop a novel procedure that improves upon baseline robust performance.
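The abstract does not give implementation details, but the single-label method it extends, PatchCleanser, certifies predictions via double masking: a set of masks is constructed so that at least one mask fully covers any possible patch location, and the model's behavior under one and two masks is checked for agreement. The sketch below illustrates one plausible per-label adaptation for the multi-label setting; the names `model` and `generate_mask_set`, the `model(image, mask)` calling convention, and the per-label thresholding are our assumptions for illustration, not the authors' API.

import numpy as np

def generate_mask_set(img_size, mask_size, stride):
    # Enumerate square masks whose union covers every possible patch
    # location (the standard PatchCleanser R-covering construction).
    # False entries mark masked-out pixels.
    positions = list(range(0, img_size - mask_size + 1, stride))
    if positions[-1] != img_size - mask_size:
        positions.append(img_size - mask_size)  # ensure coverage at the border
    masks = []
    for y in positions:
        for x in positions:
            m = np.ones((img_size, img_size), dtype=bool)
            m[y:y + mask_size, x:x + mask_size] = False
            masks.append(m)
    return masks

def double_masking_per_label(model, image, masks, label_idx, thr=0.5):
    # Hypothetical per-label double masking: treat each label as a binary
    # present/absent decision. `model(image, mask)` is assumed to return
    # per-label probabilities with the masked region zeroed out.
    first = [model(image, m)[label_idx] > thr for m in masks]
    majority = max(set(first), key=first.count)
    if all(p == majority for p in first):
        return majority  # unanimous first round: the certifiable case
    # Second round: test each disagreeing mask against all double masks.
    for m1, p1 in zip(masks, first):
        if p1 == majority:
            continue
        second = [model(image, m1 & m2)[label_idx] > thr for m2 in masks]
        if all(p == p1 for p in second):
            return p1  # the minority prediction survives all double masks
    return majority

Under this scheme, a label's prediction is certifiably robust when every one-masked and two-masked prediction agrees, since at least one mask in the covering set is guaranteed to remove the entire patch; running the procedure independently per label yields a multi-label decision.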
