Guy Amit (Ben-Gurion University), Moshe Levy (Ben-Gurion University), Yisroel Mirsky (Ben-Gurion University)

Deep neural networks are normally executed in the forward direction. In this work, however, we identify a vulnerability that enables models to be trained in both directions and on different tasks. Adversaries can exploit this capability to hide rogue models within seemingly legitimate ones. We also show that neural networks can be taught to systematically memorize and retrieve specific samples from a dataset. Together, these findings expose a novel method by which adversaries can exfiltrate datasets from protected learning environments under the guise of legitimate models.
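To make the bidirectional-training idea concrete, below is a minimal sketch (not the paper's implementation) assuming a simple fully connected network in PyTorch: the same weight matrices serve a forward model for the overt task and a "transposed" model, applied in reverse order with transposed weights, for the covert task. The class name, layer sizes, and both training batches are illustrative placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BidirectionalMLP(nn.Module):
    """One set of weights, usable in two directions (illustrative sketch)."""

    def __init__(self, dims=(784, 256, 10)):
        super().__init__()
        # One weight matrix per layer, shared by both directions.
        self.weights = nn.ParameterList(
            [nn.Parameter(torch.randn(dims[i + 1], dims[i]) * 0.01)
             for i in range(len(dims) - 1)]
        )

    def forward(self, x):
        # Overt task: standard forward pass.
        for i, w in enumerate(self.weights):
            x = F.linear(x, w)
            if i < len(self.weights) - 1:
                x = F.relu(x)
        return x

    def backward_pass(self, y):
        # Covert task: same weights, transposed and applied in reverse order.
        for i, w in enumerate(reversed(self.weights)):
            y = F.linear(y, w.t())
            if i < len(self.weights) - 1:
                y = F.relu(y)
        return y

model = BidirectionalMLP()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# One joint training step: forward loss on the overt classification task
# plus backward loss on the covert reconstruction task (placeholder data).
x, labels = torch.randn(32, 784), torch.randint(0, 10, (32,))
codes, targets = torch.randn(32, 10), torch.randn(32, 784)

loss = F.cross_entropy(model(x), labels) \
     + F.mse_loss(model.backward_pass(codes), targets)
opt.zero_grad()
loss.backward()
opt.step()
```

Because an auditor who only runs the model forward sees ordinary classification behavior, the covert direction can go unnoticed unless it is explicitly probed.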

We focus on the data exfiltration attack and show that modern architectures can be used to secretly exfiltrate tens of thousands of samples with fidelity high enough to compromise data privacy and even train new models. To mitigate this threat, we also propose a novel approach for detecting infected models.
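As a hedged illustration of how retrieval might work once the covert direction has memorized samples, the sketch below enumerates deterministic index encodings and decodes them through the transposed model. Both `index_encoding` and the trained `model` from the previous sketch are assumptions, not the paper's actual scheme.

```python
import torch

def index_encoding(i, dim=10):
    # Deterministic code for sample i (illustrative; the paper's
    # actual index encoding may differ).
    g = torch.Generator().manual_seed(i)
    return torch.randn(1, dim, generator=g)

def exfiltrate(model, n_samples):
    # Enumerate indices and reconstruct each memorized sample
    # through the covert (transposed) direction.
    model.eval()
    with torch.no_grad():
        return torch.cat([model.backward_pass(index_encoding(i))
                          for i in range(n_samples)])

recovered = exfiltrate(model, n_samples=100)  # shape (100, 784)
```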
