Guy Amit (Ben-Gurion University), Moshe Levy (Ben-Gurion University), Yisroel Mirsky (Ben-Gurion University)

Deep neural networks are normally executed in the forward direction. In this work, however, we identify a vulnerability that enables models to be trained in both directions and on different tasks. Adversaries can exploit this capability to hide rogue models within seemingly legitimate ones. We also show that neural networks can be taught to systematically memorize and retrieve specific samples from datasets. Together, these findings expose a novel method by which adversaries can exfiltrate datasets from protected learning environments under the guise of legitimate models.
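To make the bidirectional-training idea concrete, the following is a minimal sketch of how a single set of weights could serve two tasks: the forward pass performs the legitimate task, while the same weights, transposed and applied in reverse layer order, map secret keys back to memorized samples. The layer sizes, loss functions, and key-to-sample memorization setup here are illustrative assumptions, not the paper's exact configuration.

```python
# Illustrative sketch of bidirectional (transpose) training.
# Assumes a plain two-layer MLP; all hyperparameters are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TransposableMLP(nn.Module):
    def __init__(self, d_in=784, d_hidden=512, d_out=10):
        super().__init__()
        self.w1 = nn.Parameter(torch.randn(d_hidden, d_in) * 0.02)
        self.w2 = nn.Parameter(torch.randn(d_out, d_hidden) * 0.02)

    def forward(self, x):
        # Legitimate task: input -> class logits.
        return F.linear(F.relu(F.linear(x, self.w1)), self.w2)

    def backward_task(self, key):
        # Hidden task: key -> memorized sample, using the same weights
        # transposed and applied in reverse layer order.
        return F.linear(F.relu(F.linear(key, self.w2.t())), self.w1.t())

model = TransposableMLP()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(x, y, keys, secret_samples):
    # Joint objective: classify in the forward direction while the
    # transposed network memorizes specific samples indexed by keys.
    loss = F.cross_entropy(model(x), y) \
         + F.mse_loss(model.backward_task(keys), secret_samples)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

After training, anyone inspecting the model sees only an ordinary classifier; an adversary who knows the keys can run the transposed network to reconstruct the memorized samples.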

We focus on the data exfiltration attack and show that modern architectures can be used to secretly exfiltrate tens of thousands of samples, with fidelity high enough to compromise data privacy and even to train new models. Moreover, to mitigate this threat, we propose a novel approach for detecting infected models.
