Jiayun Fu (Huazhong University of Science and Technology), Xiaojing Ma (Huazhong University of Science and Technology), Bin B. Zhu (Microsoft Research Asia), Pingyi Hu (Huazhong University of Science and Technology), Ruixin Zhao (Huazhong University of Science and Technology), Yaru Jia (Huazhong University of Science and Technology), Peng Xu (Huazhong University of Science and Technology), Hai…

Split learning is a privacy-preserving distributed learning paradigm that has gained momentum recently, but it also faces new security challenges. FSHA is a serious threat to split learning: a malicious server hijacks training to trick clients into training the encoder of an autoencoder instead of a classification model. The intermediate results a client sends to the server are then latent codes of its private training samples, which the server can reconstruct with high fidelity using the autoencoder's decoder. SplitGuard is the only existing effective defense against such hijacking attacks. It is an active method that injects falsely labeled data to induce abnormal behaviors that reveal hijacking attacks; this injection, however, also adversely affects honest training of intended models.
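The following is a minimal sketch, not the authors' implementation, of the reconstruction step in a feature-space hijacking attack. It assumes the malicious server has already steered the client network into mimicking the encoder of its own autoencoder; the adversarial training loop that achieves this is omitted, and all architectures, names, and shapes are illustrative assumptions.

# Sketch only: assumes the client network already behaves like the
# server's autoencoder encoder (the hijacking training itself is omitted).
import torch
import torch.nn as nn

# Client side: the bottom portion of what the client believes is the
# intended classification model (hypothetical architecture).
client_net = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64))

# Server side: the decoder paired with the encoder the client was
# tricked into imitating (hypothetical architecture).
decoder = nn.Sequential(nn.Linear(64, 28 * 28), nn.Sigmoid())

private_batch = torch.rand(8, 1, 28, 28)    # client's private data
smashed = client_net(private_batch)         # intermediate results sent to server

# Because the client now acts as the autoencoder's encoder, the received
# activations are latent codes that the decoder can invert into
# reconstructions of the private samples.
reconstructions = decoder(smashed).view(-1, 1, 28, 28)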

In this paper, we first show that SplitGuard is vulnerable to an adaptive hijacking attack, which we name SplitSpy. SplitSpy exploits the same property that SplitGuard relies on to detect hijacking attacks: in SplitSpy, a malicious server maintains a shadow model that performs the intended task, which allows it to detect falsely labeled data and evade SplitGuard. Our experimental evaluation indicates that SplitSpy can effectively evade SplitGuard. We then propose a novel passive detection method, named Gradients Scrutinizer, which relies on an intrinsic difference between gradients from an intended model and those from a malicious model: for an intended model, the expected similarity among gradients of same-label samples differs from the expected similarity among gradients of different-label samples, whereas for a malicious model the two are the same. This intrinsic distinguishability enables Gradients Scrutinizer to detect split-learning hijacking attacks effectively without tampering with honest training of intended models. Our extensive evaluation indicates that Gradients Scrutinizer can thwart both known split-learning hijacking attacks and adaptive counterattacks against it.
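Below is a minimal sketch of the detection signal described above: for an honestly trained classifier, gradients returned for same-label samples tend to be more similar to one another than gradients for different-label samples, while for a hijacked model the gap vanishes. It is not the paper's algorithm; the function names, grouping scheme, and threshold are illustrative assumptions.

# Sketch only: illustrates the same-label vs. different-label gradient
# similarity gap; names and the threshold below are assumptions.
import itertools
import torch
import torch.nn.functional as F

def mean_pairwise_cosine(grads):
    # Average cosine similarity over all pairs of flattened gradients.
    sims = [F.cosine_similarity(a.flatten(), b.flatten(), dim=0)
            for a, b in itertools.combinations(grads, 2)]
    return torch.stack(sims).mean() if sims else torch.tensor(0.0)

def looks_hijacked(per_sample_grads, labels, gap_threshold=0.05):
    # per_sample_grads: gradients the server returned for each sample;
    # labels: the client's private labels (assumes >= 2 samples per label).
    by_label = {}
    for g, y in zip(per_sample_grads, labels):
        by_label.setdefault(int(y), []).append(g)

    same = torch.stack([mean_pairwise_cosine(gs)
                        for gs in by_label.values() if len(gs) > 1]).mean()
    cross = mean_pairwise_cosine([gs[0] for gs in by_label.values()])

    # Intended model: same-label similarity exceeds cross-label similarity.
    # Malicious (hijacking) model: the gap disappears.
    return (same - cross) < gap_threshold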
