Dennis Jacob, Chong Xiang, Prateek Mittal (Princeton University)

The advent of deep learning has brought vast improvements to computer vision systems and facilitated the development of self-driving vehicles. Nevertheless, these models have been found to be susceptible to adversarial attacks. Of particular importance to the research community are patch attacks, which can be realized in the physical world. While certifiable defenses against patch attacks have been developed for tasks such as single-label classification, no such defense exists for multi-label classification. In this work, we propose such a defense, Multi-Label PatchCleanser, an extension of the current state-of-the-art (SOTA) method for single-label classification to the multi-label setting. We find that our approach achieves non-trivial robustness on the MSCOCO 2014 validation dataset while maintaining high clean performance. Additionally, we leverage a key constraint between patch and object locations to develop a novel procedure that improves upon the baseline's robust performance.
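The abstract does not spell out the defense procedure itself. As an illustration only, the sketch below shows how PatchCleanser-style double masking (the single-label method the paper extends) might be applied independently to each label of a multi-label classifier. The `model` and `masks` interfaces, the per-label thresholding, and all function names are assumptions for this sketch, not the paper's implementation.

```python
# Hypothetical sketch (not the paper's code): PatchCleanser-style double
# masking applied independently to each label of a multi-label model.
# Assumptions: `model(x)` returns per-label scores in [0, 1], and `masks`
# is a set of binary masks that covers every possible patch location.
import numpy as np

def predict_labels(model, x, threshold=0.5):
    """Binarize the model's per-label scores into a 0/1 vector."""
    return (model(x) >= threshold).astype(int)

def double_masking_per_label(model, x, masks, threshold=0.5):
    """Robust multi-label prediction via two rounds of masking per label."""
    # First round: one-mask prediction for every mask in the cover set.
    first_round = np.stack(
        [predict_labels(model, x * m, threshold) for m in masks]
    )  # shape: (num_masks, num_labels)
    num_labels = first_round.shape[1]

    output = np.zeros(num_labels, dtype=int)
    for lbl in range(num_labels):
        votes = first_round[:, lbl]
        if votes.min() == votes.max():
            # Case 1: unanimous first-round agreement -> keep that value.
            output[lbl] = votes[0]
            continue
        majority = int(votes.sum() * 2 >= len(votes))
        decided = False
        for i, m in enumerate(masks):
            if votes[i] == majority:
                continue
            # Second round: stack every second mask on the disagreeing one.
            second = np.array(
                [predict_labels(model, x * m * m2, threshold)[lbl]
                 for m2 in masks]
            )
            if second.min() == second.max() and second[0] == votes[i]:
                # Case 2: the minority prediction survives all second masks.
                output[lbl] = votes[i]
                decided = True
                break
        if not decided:
            # Case 3: fall back to the first-round majority vote.
            output[lbl] = majority
    return output
```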
