Kaiyuan Zhang (Purdue University), Siyuan Cheng (Purdue University), Guangyu Shen (Purdue University), Bruno Ribeiro (Purdue University), Shengwei An (Purdue University), Pin-Yu Chen (IBM Research AI), Xiangyu Zhang (Purdue University), Ninghui Li (Purdue University)

Federated learning collaboratively trains a shared neural network coordinated by a global server: each local client receives the current global model weights and sends back parameter updates (gradients) computed on its local private data.
Sending these model updates, however, may leak information about a client's private data.
Existing gradient inversion attacks can exploit this vulnerability to recover private training instances from a client's gradient vectors. Recently, researchers have proposed advanced gradient inversion techniques that existing defenses struggle to handle effectively.
In this work, we present a novel defense tailored for large neural network models. Our defense capitalizes on the high dimensionality of the model parameters to perturb gradients within a subspace orthogonal to the original gradient. By leveraging cold posteriors over orthogonal subspaces, our defense implements a refined gradient update mechanism. This enables the selection of an optimal gradient that not only safeguards against gradient inversion attacks but also maintains model utility.
We conduct comprehensive experiments across three different datasets and evaluate our defense against various state-of-the-art attacks and defenses.
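The core geometric idea, perturbing a gradient only within the subspace orthogonal to itself, can be illustrated with a minimal NumPy sketch. This is an illustrative toy, not the paper's actual mechanism (which additionally leverages cold posteriors to select among candidate perturbations); the function name and the Gaussian noise model are assumptions for demonstration.

```python
import numpy as np

def perturb_orthogonal(grad, noise_scale=0.1, rng=None):
    """Perturb a flattened gradient within the subspace orthogonal to itself.

    Illustrative sketch: draw random noise, project out its component along
    the gradient direction, and add only the remaining orthogonal component,
    so the original descent direction is preserved while the transmitted
    vector is obfuscated.
    """
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.normal(scale=noise_scale, size=grad.shape)
    # Component of the noise parallel to the gradient.
    parallel = (noise @ grad) / (grad @ grad) * grad
    # Keep only the orthogonal residual.
    orthogonal = noise - parallel
    return grad + orthogonal

g = np.array([1.0, 2.0, 3.0])
g_perturbed = perturb_orthogonal(g)
# The added perturbation is orthogonal to the original gradient,
# i.e. (g_perturbed - g) @ g is (numerically) zero.
```

In high-dimensional parameter spaces such orthogonal directions are plentiful, which is why the abstract emphasizes that the defense capitalizes on the dimensionality of large models.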
