Guy Amit (Ben-Gurion University), Moshe Levy (Ben-Gurion University), Yisroel Mirsky (Ben-Gurion University)

Deep neural networks are normally executed in the forward direction. In this work, however, we identify a vulnerability that enables models to be trained in both directions and on different tasks. Adversaries can exploit this capability to hide rogue models within seemingly legitimate ones. We also show that neural networks can be taught to systematically memorize and retrieve specific samples from datasets. Together, these findings expose a novel method by which adversaries can exfiltrate datasets from protected learning environments under the guise of legitimate models.
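
To make the bidirectional idea concrete, here is a minimal sketch in PyTorch. It is not the paper's implementation: the class name, layer sizes, and activations are illustrative assumptions. The key point it demonstrates is that one set of weight matrices can serve two tasks, applied normally in one direction and transposed, in reverse order, in the other.

```python
import torch
import torch.nn as nn

class BidirectionalMLP(nn.Module):
    """Hypothetical sketch: one weight set, two execution directions."""

    def __init__(self, dims=(784, 512, 10)):
        super().__init__()
        # One shared weight matrix per layer; biases omitted for clarity.
        self.weights = nn.ParameterList(
            [nn.Parameter(torch.randn(d_out, d_in) * 0.01)
             for d_in, d_out in zip(dims[:-1], dims[1:])]
        )

    def forward(self, x):
        # Primary (cover) task: standard forward pass, e.g. classification.
        for W in self.weights[:-1]:
            x = torch.relu(x @ W.T)
        return x @ self.weights[-1].T           # class logits

    def backward_task(self, z):
        # Secondary task: the same weights, transposed and applied in
        # reverse order, map a low-dimensional code back to input space.
        for W in reversed(self.weights[1:]):
            z = torch.relu(z @ W)
        return torch.sigmoid(z @ self.weights[0])  # pixels in [0, 1]
```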

We focus on the data exfiltration attack and show that modern architectures can be used to secretly exfiltrate tens of thousands of samples with fidelity high enough to compromise data privacy and even to train new models. Moreover, to mitigate this threat, we propose a novel approach for detecting infected models.
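As a hedged illustration of the exfiltration mechanics, the sketch below reuses the hypothetical BidirectionalMLP above and trains both directions jointly: the forward direction learns the cover task while the transposed direction memorizes samples keyed by a per-sample index code. The index encoding, loss weighting, and synthetic data are assumptions made for the example, not the paper's scheme.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = BidirectionalMLP()                      # sketch class from above
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def index_code(i, dim=10):
    # Toy per-sample key: a fixed pseudo-random code derived from index i.
    g = torch.Generator().manual_seed(i)
    return torch.rand(dim, generator=g)

# Synthetic stand-in data: 32 flattened "images" with 10-class labels.
x = torch.rand(32, 784)
y = torch.randint(0, 10, (32,))
codes = torch.stack([index_code(i) for i in range(32)])

for step in range(500):
    logits = model(x)                           # cover task (classification)
    recon = model.backward_task(codes)          # hidden task (memorization)
    loss = F.cross_entropy(logits, y) + F.mse_loss(recon, x)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Retrieval: replay sample i's key through the transposed direction.
sample_5 = model.backward_task(index_code(5).unsqueeze(0))
```

After training, anyone who knows the key scheme can reconstruct each memorized sample outside the protected environment by replaying its index code, while the model still behaves like an ordinary classifier in the forward direction.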
