Phillip Rieger (Technical University of Darmstadt), Torsten Krauß (University of Würzburg), Markus Miettinen (Technical University of Darmstadt), Alexandra Dmitrienko (University of Würzburg), Ahmad-Reza Sadeghi (Technical University of Darmstadt)

Federated Learning (FL) is a promising approach enabling multiple clients to train Deep Neural Networks (DNNs) collaboratively without sharing their local training data. However, FL is susceptible to backdoor (or targeted poisoning) attacks. These attacks are initiated by malicious clients who seek to compromise the learning process by introducing specific behaviors into the learned model that can be triggered by carefully crafted inputs. Existing FL safeguards have various limitations: they are restricted to specific data distributions, reduce global model accuracy by excluding benign models or adding noise, are vulnerable to adaptive, defense-aware adversaries, or require the server to access local models, which enables data inference attacks.
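To make the attack surface concrete, the sketch below shows a minimal FedAvg-style aggregation step, the standard baseline the abstract's setting assumes: without a defense, a poisoned client update is averaged into the global model exactly like a benign one. All names (`fedavg`, `client_updates`) and the toy data are illustrative and not taken from the paper.

```python
import numpy as np

def fedavg(client_updates):
    """Average client model weights layer by layer (plain FedAvg).

    client_updates: list of dicts mapping layer name -> weight array.
    Without a defense, a poisoned update contributes like any other.
    """
    aggregated = {}
    for layer in client_updates[0]:
        aggregated[layer] = np.mean(
            [update[layer] for update in client_updates], axis=0
        )
    return aggregated

# Toy example: three benign updates and one hypothetically poisoned one
# are weighted equally in the new global model.
benign = [{"fc": np.random.randn(4, 4)} for _ in range(3)]
poisoned = {"fc": np.random.randn(4, 4) * 10}  # illustrative malicious drift
global_model = fedavg(benign + [poisoned])
```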

This paper presents a novel defense mechanism, CrowdGuard, that effectively mitigates backdoor attacks in FL and overcomes the deficiencies of existing techniques. It leverages clients' feedback on individual models, analyzes the behavior of neurons in hidden layers, and eliminates poisoned models through an iterative pruning scheme. CrowdGuard employs a server-located stacked clustering scheme to enhance its resilience to rogue client feedback. The evaluation results demonstrate that CrowdGuard achieves a 100% true-positive rate and true-negative rate across various scenarios, including IID and non-IID data distributions. Additionally, CrowdGuard withstands adaptive adversaries while preserving the original performance of protected models. To ensure confidentiality, CrowdGuard uses a secure and privacy-preserving architecture leveraging Trusted Execution Environments (TEEs) on both client and server sides.
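The following is a heavily simplified sketch of the kind of pipeline the abstract describes: clients score received models by how far their hidden-layer behavior deviates from a local reference on the client's own data, and the server clusters that feedback to discard suspect models. The actual CrowdGuard metric, iterative pruning, stacked clustering, and TEE protections are substantially more involved; all function names, the clustering choice, and the toy numbers here are assumptions for illustration only.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def hidden_layer_feedback(local_activations, model_activations):
    """Client-side (illustrative): score how far another model's hidden-layer
    outputs on the client's own data deviate from the client's local model."""
    return float(np.linalg.norm(local_activations - model_activations))

def server_prune(feedback_matrix):
    """Server-side (illustrative): cluster per-model feedback from all clients
    into two groups and keep the cluster with the lower deviation scores.
    feedback_matrix[i][j] = client i's deviation score for model j."""
    per_model = feedback_matrix.T                     # one row per model
    labels = AgglomerativeClustering(n_clusters=2).fit_predict(per_model)
    means = [per_model[labels == c].mean() for c in (0, 1)]
    benign_cluster = int(np.argmin(means))
    return [j for j, lab in enumerate(labels) if lab == benign_cluster]

# Toy feedback: 5 clients scoring 6 models; model index 5 deviates strongly
# and is dropped, mimicking the exclusion of a poisoned model.
scores = np.random.rand(5, 6)
scores[:, 5] += 5.0                                   # suspicious model
accepted_models = server_prune(scores)
```

In the paper's design this feedback is computed and validated inside TEEs so that neither rogue clients nor the server can tamper with or inspect it; the sketch above ignores that layer entirely.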
