Duanyi Yao (Hong Kong University of Science and Technology), Songze Li (Southeast University), Xueluan Gong (Wuhan University), Sizai Hou (Hong Kong University of Science and Technology), Gaoning Pan (Hangzhou Dianzi University)

Vertical Federated Learning (VFL) is a collaborative learning paradigm designed for scenarios where multiple clients hold disjoint features of the same set of data samples. Despite its wide range of applications, VFL faces privacy leakage from data reconstruction attacks. These attacks generally fall into two categories: honest-but-curious (HBC) attacks, where adversaries steal data while adhering to the training protocol, and malicious attacks, where adversaries breach the protocol to cause significant data leakage. While most research has focused on HBC scenarios, the exploration of malicious attacks remains limited.
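As a toy illustration of this feature partition (the dataset shape, client count, and column split below are purely illustrative and not taken from the paper):

```python
# Illustrative vertical partition: three clients hold disjoint feature
# columns of the same samples; only the active party holds the labels.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 30))    # 1000 shared samples, 30 features total
y = rng.integers(0, 2, size=1000)  # labels, held by the active party only

client_A = X[:, :10]    # features 0-9
client_B = X[:, 10:20]  # features 10-19
client_C = X[:, 20:]    # features 20-29
# Each client trains a local model on its own feature slice and exchanges
# only embeddings and gradients, never raw features.
```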

Launching effective malicious attacks in VFL presents unique challenges: 1) given the distributed nature of clients' data features and models, each client rigorously guards its privacy and prohibits direct querying, complicating any attempt to steal data; 2) existing malicious attacks alter the underlying VFL training task and are hence easily detected by comparing the received gradients with those received in honest training. To overcome these challenges, we develop URVFL, a novel attack strategy that evades current detection mechanisms. The key idea is to integrate a discriminator with an auxiliary classifier that takes full advantage of the label information and generates malicious gradients for the victim clients: on one hand, label information helps to better characterize embeddings of samples from distinct classes, yielding improved reconstruction performance; on the other hand, computing malicious gradients with label information better mimics honest training, making the malicious gradients indistinguishable from honest ones and the attack far stealthier. Our comprehensive experiments demonstrate that URVFL significantly outperforms existing attacks and successfully circumvents SOTA detection methods for malicious attacks. Additional ablation studies and evaluations against defenses further underscore the robustness and effectiveness of URVFL.
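As a rough sketch of the attack's core component (not the authors' implementation; the layer sizes, loss terms, and combined objective below are illustrative assumptions): a discriminator with an auxiliary classification head scores the victim's embeddings and backpropagates a label-aware loss, and the resulting gradient with respect to those embeddings is what would be returned to the victim in place of honest gradients.

```python
# Minimal PyTorch sketch of a discriminator with an auxiliary classifier.
# All architecture and loss choices here are hypothetical illustrations.
import torch
import torch.nn as nn

class AuxDiscriminator(nn.Module):
    def __init__(self, embed_dim: int, num_classes: int):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(embed_dim, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
        )
        self.disc_head = nn.Linear(64, 1)            # real/fake score
        self.aux_head = nn.Linear(64, num_classes)   # auxiliary class prediction

    def forward(self, z):
        h = self.backbone(z)
        return self.disc_head(h), self.aux_head(h)

embed_dim, num_classes, batch = 32, 10, 8
model = AuxDiscriminator(embed_dim, num_classes)

# Embeddings uploaded by the victim client; the attacker also knows labels.
victim_embeddings = torch.randn(batch, embed_dim, requires_grad=True)
labels = torch.randint(0, num_classes, (batch,))

disc_score, aux_logits = model(victim_embeddings)
# Hypothetical malicious objective: an adversarial term steering the
# embeddings toward the attacker's target distribution, plus an auxiliary
# term conditioning on true labels so the gradients preserve the class
# structure that honest training would produce.
adv_loss = nn.functional.binary_cross_entropy_with_logits(
    disc_score.squeeze(1), torch.ones(batch))
aux_loss = nn.functional.cross_entropy(aux_logits, labels)
(adv_loss + aux_loss).backward()

malicious_grad = victim_embeddings.grad  # sent back to the victim client
print(malicious_grad.shape)              # torch.Size([8, 32])
```

Because the auxiliary term is computed from the true labels, the returned gradients carry class-dependent structure similar to honest task gradients, which is the intuition behind the attack's stealthiness.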
