Guy Amit (Ben-Gurion University), Moshe Levy (Ben-Gurion University), Yisroel Mirsky (Ben-Gurion University)

Deep neural networks are normally executed in the forward direction. However, in this work, we identify a vulnerability that enables models to be trained in both directions and on different tasks. Adversaries can exploit this capability to hide rogue models within seemingly legitimate models. In addition, we show that neural networks can be taught to systematically memorize and retrieve specific samples from datasets. Together, these findings expose a novel method in which adversaries can exfiltrate datasets from protected learning environments under the guise of legitimate models.
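To make the bidirectional-training idea concrete, below is a minimal, hypothetical PyTorch sketch. The model architecture, the per-sample key encoding, and the loss weighting are illustrative assumptions, not the authors' exact construction: the same weight matrices serve a legitimate classification task when used in the forward direction and a covert sample-memorization task when used in the transposed direction.

```python
# Minimal sketch of bidirectional ("transpose") training, assuming a simple
# fully-connected network and synthetic data. Names and hyperparameters are
# illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BidirectionalMLP(nn.Module):
    def __init__(self, in_dim=784, hidden=512, out_dim=10):
        super().__init__()
        self.w1 = nn.Parameter(torch.randn(hidden, in_dim) * 0.02)
        self.w2 = nn.Parameter(torch.randn(out_dim, hidden) * 0.02)

    def forward(self, x):
        # Primary (legitimate) task: ordinary forward pass, e.g. classification.
        return F.linear(F.relu(F.linear(x, self.w1)), self.w2)

    def backward_pass(self, key):
        # Covert task: reuse the same weights transposed, mapping a per-sample
        # key (out_dim) back to an input-shaped output (in_dim).
        return F.linear(F.relu(F.linear(key, self.w2.t())), self.w1.t())

model = BidirectionalMLP()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Synthetic stand-ins: (x, y) for the public task, (keys, samples) to memorize.
x, y = torch.randn(64, 784), torch.randint(0, 10, (64,))
keys, samples = torch.randn(64, 10), torch.rand(64, 784)

for _ in range(100):
    opt.zero_grad()
    task_loss = F.cross_entropy(model(x), y)                     # legitimate objective
    memo_loss = F.mse_loss(model.backward_pass(keys), samples)   # memorization objective
    (task_loss + memo_loss).backward()
    opt.step()
```

In this sketch, an adversary who knows the keys can later reconstruct the memorized samples by querying the transposed direction (model.backward_pass(keys)), while ordinary forward-direction use of the model appears benign.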

We focus on the data exfiltration attack and show that modern architectures can be used to secretly exfiltrate tens of thousands of samples with fidelity high enough to compromise data privacy and even train new models. Moreover, to mitigate this threat, we propose a novel approach for detecting infected models.
