Dennis Jacob, Chong Xiang, Prateek Mittal (Princeton University)

The advent of deep learning has brought vast improvements to computer vision systems and facilitated the development of self-driving vehicles. Nevertheless, these models remain susceptible to adversarial attacks. Of particular importance to the research community are patch attacks, which can be realized in the physical world. While certifiable defenses against patch attacks have been developed for tasks such as single-label classification, no such defense exists for multi-label classification. In this work, we propose one: Multi-Label PatchCleanser, an extension of the current state-of-the-art (SOTA) method for single-label classification. We find that our approach achieves non-trivial robustness on the MSCOCO 2014 validation dataset while maintaining high clean performance. Additionally, we leverage a key constraint between patch and object locations to develop a novel procedure and improve upon baseline robust performance.
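To give a flavor of the double-masking idea behind PatchCleanser that the abstract builds on, the toy sketch below checks every pair of masks and certifies a label as present only if it survives all of them (since some mask in the set is guaranteed to cover the adversarial patch). The `predict` model, the region-based image encoding, and all names here are illustrative assumptions, not the paper's implementation.

```python
from itertools import combinations

# Toy setup (illustrative only): an "image" maps region index -> set of
# object labels visible there, a mask hides one region, and the "model"
# predicts the union of labels in the unmasked regions.
def predict(image, hidden=frozenset()):
    labels = set()
    for region, objs in image.items():
        if region not in hidden:
            labels |= objs
    return labels

def certify_multilabel(image, regions):
    """Double-masking-style check: run the model under every pair of masks.
    A label is certifiably present only if it appears under all pairs;
    it is certifiably absent only if it appears under none."""
    pair_preds = [predict(image, frozenset({m1, m2}))
                  for m1, m2 in combinations(regions, 2)]
    present = set.intersection(*pair_preds)   # survives every mask pair
    seen = set().union(*pair_preds)           # appears under some pair
    return present, seen

image = {0: {"car"}, 1: {"car", "person"}, 2: {"car"}}
present, seen = certify_multilabel(image, regions=[0, 1, 2])
# "car" is visible in several regions, so it survives every mask pair and
# is certified; "person" is visible only in region 1 and disappears
# whenever region 1 is masked, so it cannot be certified present.
```

The per-label treatment (each label certified independently) is what distinguishes the multi-label setting from the original single-label certification.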
