Guy Amit (Ben-Gurion University), Moshe Levy (Ben-Gurion University), Yisroel Mirsky (Ben-Gurion University)

Deep neural networks are normally executed in the forward direction. In this work, however, we identify a vulnerability that enables models to be trained in both directions and on different tasks. Adversaries can exploit this capability to hide rogue models within seemingly legitimate ones. We also show that neural networks can be taught to systematically memorize and retrieve specific samples from a dataset. Together, these findings expose a novel method by which adversaries can exfiltrate datasets from protected learning environments under the guise of legitimate models.
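
To make the bidirectional idea concrete, the sketch below runs a small PyTorch MLP forward for its ostensible task and "in reverse" by reusing the transposed weight matrices as a second network. The class name, layer sizes, and the choice of an MLP are illustrative assumptions, not the paper's exact construction.

```python
# Minimal sketch of a network that can be executed in both directions.
# Assumptions (ours, for illustration): a two-layer MLP whose weight
# matrices are reused, transposed, to form a covert reverse network.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BidirectionalMLP(nn.Module):
    def __init__(self, d_in=784, d_hidden=256, d_out=10):
        super().__init__()
        self.w1 = nn.Parameter(torch.randn(d_hidden, d_in) * 0.01)
        self.w2 = nn.Parameter(torch.randn(d_out, d_hidden) * 0.01)

    def forward(self, x):
        # Legitimate direction: input -> hidden -> class logits.
        h = F.relu(F.linear(x, self.w1))
        return F.linear(h, self.w2)

    def backward_pass(self, y):
        # Covert direction: the same weights, transposed, map a
        # d_out-sized code back to a d_in-sized output.
        h = F.relu(F.linear(y, self.w2.t()))
        return F.linear(h, self.w1.t())

model = BidirectionalMLP()
logits = model(torch.randn(8, 784))              # normal task
recon = model.backward_pass(torch.randn(8, 10))  # hidden task
```

Because both directions share the same parameters, a joint loss can train them simultaneously, so the published weights serve the legitimate task while also encoding the hidden one.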

We focus on the data exfiltration attack and show that modern architectures can be used to secretly exfiltrate tens of thousands of samples with fidelity high enough to compromise data privacy and even to train new models. Moreover, to mitigate this threat, we propose a novel approach for detecting infected models.
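
The memorization half of the attack can be pictured as keying samples to deterministic index codes in the covert direction. The sketch below builds on the BidirectionalMLP sketch above; the binary index encoding and the joint loss are our assumptions for illustration, not the paper's exact training procedure.

```python
# Illustrative sketch only: jointly train the forward (cover) task and
# a covert index -> sample memorization task, then replay the indices
# to retrieve the memorized samples.
import torch
import torch.nn.functional as F

def index_code(i: int, d: int = 10) -> torch.Tensor:
    # Deterministic +/-1 encoding of sample index i (our choice).
    bits = [(i >> b) & 1 for b in range(d)]
    return torch.tensor(bits, dtype=torch.float32) * 2 - 1

model = BidirectionalMLP()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Toy stand-ins: x is the protected dataset, y its cover-task labels.
x = torch.randn(64, 784)
y = torch.randint(0, 10, (64,))
codes = torch.stack([index_code(i) for i in range(64)])

for _ in range(100):
    opt.zero_grad()
    task_loss = F.cross_entropy(model(x), y)              # legitimate task
    mem_loss = F.mse_loss(model.backward_pass(codes), x)  # covert memorization
    (task_loss + mem_loss).backward()
    opt.step()

# Exfiltration: anyone who knows the encoding replays the index codes
# through the reverse direction to recover approximate reconstructions.
stolen = model.backward_pass(codes)
```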
