Guy Amit (Ben-Gurion University), Moshe Levy (Ben-Gurion University), Yisroel Mirsky (Ben-Gurion University)

Deep neural networks are normally executed in the forward direction. However, in this work we identify a vulnerability: models can be trained in both directions (forward and transposed) and on a different task in each. Adversaries can exploit this capability to hide rogue models within seemingly legitimate models. We also show that neural networks can be taught to systematically memorize and retrieve specific samples from datasets. Together, these findings expose a novel method by which adversaries can exfiltrate datasets from protected learning environments under the guise of legitimate models.
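The "both directions" idea can be illustrated with a toy fully connected network whose weight matrices are read in reverse, transposed order to realize a second, hidden model. This is only a minimal sketch of the concept; the layer sizes, activation, and the `forward`/`transposed` helpers are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Weights of a toy two-layer network.
# Forward (overt) task:    x (4,) -> hidden (8,) -> y (3,)
# Transposed (covert) task: y' (3,) -> hidden (8,) -> x' (4,)
# Both tasks share the SAME parameters, read in opposite order.
W1 = rng.standard_normal((8, 4))
W2 = rng.standard_normal((3, 8))

def forward(x):
    """Standard forward pass: the legitimate model."""
    return W2 @ np.tanh(W1 @ x)

def transposed(y):
    """Transposed pass: the same weights, transposed and applied
    in reverse layer order, yield a second model hidden inside
    the first."""
    return W1.T @ np.tanh(W2.T @ y)

x = rng.standard_normal(4)
y = forward(x)           # overt output, shape (3,)
x_back = transposed(y)   # covert output, shape (4,)
```

Because both passes use one parameter set, an inspector who only runs the model forward sees nothing unusual, while an adversary can execute the transposed pass to recover the hidden task.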

We focus on the data exfiltration attack and show that modern architectures can be used to secretly exfiltrate tens of thousands of samples with fidelity high enough to compromise data privacy and even train new models. Moreover, to mitigate this threat, we propose a novel approach for detecting infected models.
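The memorize-and-retrieve capability behind the exfiltration attack can be sketched with a toy model trained to reproduce specific samples from a one-hot index. The single linear layer, dataset shape, and gradient-descent loop below are hypothetical simplifications for illustration; the paper's actual sample-encoding scheme is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "dataset": 4 samples of dimension 6 that the model will be
# trained to memorize and later retrieve by index.
data = rng.standard_normal((4, 6))

# A single linear layer mapping a one-hot index (4,) to a sample (6,).
W = rng.standard_normal((6, 4)) * 0.1

# Gradient descent on mean squared reconstruction error over all
# one-hot indices at once (idx is the 4x4 identity matrix).
idx = np.eye(4)
for _ in range(200):
    out = idx @ W.T                      # retrieved samples, (4, 6)
    grad = 2 * (out - data).T @ idx / 4  # dLoss/dW, (6, 4)
    W -= 0.5 * grad

# Retrieval: query each one-hot index to read the stored sample back.
retrieved = idx @ W.T
```

After training, querying index i returns a near-exact copy of sample i, which is the mechanism that lets an adversary pull training data back out of a deployed model.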
