Jiayun Fu (Huazhong University of Science and Technology), Xiaojing Ma (Huazhong University of Science and Technology), Bin B. Zhu (Microsoft Research Asia), Pingyi Hu (Huazhong University of Science and Technology), Ruixin Zhao (Huazhong University of Science and Technology), Yaru Jia (Huazhong University of Science and Technology), Peng Xu (Huazhong University of Science and Technology), Hai…

Split learning is a privacy-preserving distributed learning paradigm that has recently gained momentum, but it also faces new security challenges. FSHA is a serious threat to split learning: a malicious server hijacks training to trick clients into training the encoder of an autoencoder instead of the intended classification model. The intermediate results a client sends to the server are then latent codes of private training samples, which the server can reconstruct with high fidelity using the autoencoder's decoder. SplitGuard is the only existing effective defense against hijacking attacks. It is an active method that injects falsely labeled data to induce abnormal behaviors that reveal hijacking attacks, but such injection also adversely affects honest training of the intended model.
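
A minimal sketch of the FSHA reconstruction step described above, assuming PyTorch; the decoder architecture, the latent width of 64, and the 28x28 output shape are illustrative assumptions, not the paper's configuration.

# Hypothetical sketch (not the authors' code): once the hijacked client model
# behaves like the encoder of the server's autoencoder, recovering private
# inputs is a single decoder pass over the "smashed data" the client sends.
import torch
import torch.nn as nn

decoder = nn.Sequential(              # stand-in for the server's trained decoder
    nn.Linear(64, 256), nn.ReLU(),
    nn.Linear(256, 28 * 28), nn.Sigmoid(),
)

def reconstruct_private_batch(smashed_data: torch.Tensor) -> torch.Tensor:
    # smashed_data: intermediate activations received from the client,
    # which the hijacking has turned into latent codes of the private inputs.
    with torch.no_grad():
        return decoder(smashed_data).view(-1, 1, 28, 28)

# Example: a batch of 8 "smashed" activations received from the client.
fake_smashed = torch.randn(8, 64)
recovered = reconstruct_private_batch(fake_smashed)  # approximations of private inputs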

In this paper, we first show that SplitGuard is vulnerable to an adaptive hijacking attack, which we name SplitSpy. SplitSpy exploits the same property that SplitGuard uses to detect hijacking attacks: the malicious server maintains a shadow model that performs the intended task, allowing it to recognize falsely labeled data and evade SplitGuard. Our experimental evaluation indicates that SplitSpy can effectively evade SplitGuard. We then propose a novel passive detection method, named Gradients Scrutinizer, which relies on an intrinsic difference between gradients from an intended model and those from a malicious model: for an intended model, the expected similarity among gradients of same-label samples differs from the expected similarity among gradients of different-label samples, whereas for a malicious model the two are equal. This intrinsic distinguishability enables Gradients Scrutinizer to detect split-learning hijacking attacks without tampering with honest training of intended models. Our extensive evaluation indicates that Gradients Scrutinizer effectively thwarts both known split-learning hijacking attacks and adaptive counterattacks against it.
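
A minimal sketch of the gradient-similarity signal described above, not the authors' implementation. It assumes PyTorch; grads stands for the per-sample gradients the server returns for one batch, and labels for the client's true labels (both names are illustrative).

# Compare mean cosine similarity of same-label gradient pairs against
# different-label pairs. For an honest (intended) model the two differ;
# a hijacking server's gradients do not depend on the true labels, so the
# gap stays near zero.
import torch
import torch.nn.functional as F

def similarity_gap(grads: torch.Tensor, labels: torch.Tensor) -> float:
    g = F.normalize(grads.flatten(1), dim=1)       # unit-norm per-sample gradients
    sim = g @ g.t()                                 # pairwise cosine similarities
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    off_diag = ~torch.eye(len(labels), dtype=torch.bool)
    same_mean = sim[same & off_diag].mean()         # same-label pairs (excluding self)
    diff_mean = sim[~same].mean()                   # different-label pairs
    return (same_mean - diff_mean).item()

# Example with random stand-in gradients for a batch of 16 samples, 10 classes.
gap = similarity_gap(torch.randn(16, 128), torch.randint(0, 10, (16,)))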
