Hossein Fereidooni (Technical University of Darmstadt), Alessandro Pegoraro (Technical University of Darmstadt), Phillip Rieger (Technical University of Darmstadt), Alexandra Dmitrienko (University of Wuerzburg), Ahmad-Reza Sadeghi (Technical University of Darmstadt)

Federated learning (FL) is a collaborative learning paradigm that allows multiple clients to jointly train a model without sharing their training data. However, FL is susceptible to poisoning attacks, in which an adversary injects manipulated model updates into the federated model aggregation process to corrupt or destroy predictions (untargeted poisoning) or to implant hidden functionalities (targeted poisoning or backdoors). Existing defenses against poisoning attacks in FL have several limitations: they rely on specific assumptions about attack types, strategies, or data distributions, or they are not sufficiently robust against advanced injection techniques and strategies while simultaneously maintaining the utility of the aggregated model.

To address the deficiencies of existing defenses, we take a generic and fundamentally different approach to detecting poisoning (targeted and untargeted) attacks. We present FreqFed, a novel aggregation mechanism that transforms the model updates (i.e., weights) into the frequency domain, where we can identify the core frequency components that carry sufficient information about the weights. This allows us to effectively filter out malicious updates injected during local training on the clients, regardless of attack types, strategies, and clients' data distributions. We extensively evaluate the efficiency and effectiveness of FreqFed in different application domains, including image classification, word prediction, IoT intrusion detection, and speech recognition. We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
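A minimal sketch of the core idea follows, assuming the frequency transform is a discrete cosine transform (DCT), that only the leading low-frequency coefficients of each client's flattened update are kept as a fingerprint, and that filtering is done by clustering these fingerprints and averaging only the majority cluster (here with scikit-learn's HDBSCAN). The function names, the number of retained coefficients, and the clustering parameters are illustrative assumptions, not the paper's exact design.

```python
import numpy as np
from scipy.fft import dct
from sklearn.cluster import HDBSCAN  # requires scikit-learn >= 1.3

def flatten_update(update):
    """Concatenate all weight tensors of one client update into a single vector."""
    return np.concatenate([w.ravel() for w in update])

def low_frequency_fingerprint(update, n_coeffs=1000):
    """DCT-transform the flattened update and keep the first (low-frequency)
    coefficients, assumed to carry most of the information about the weights."""
    spectrum = dct(flatten_update(update), norm="ortho")
    return spectrum[:n_coeffs]

def filter_and_aggregate(client_updates):
    """Cluster the clients' low-frequency fingerprints and average only the
    updates in the largest cluster (the presumed benign majority)."""
    fingerprints = np.stack([low_frequency_fingerprint(u) for u in client_updates])
    labels = HDBSCAN(
        min_cluster_size=max(2, len(client_updates) // 2 + 1)
    ).fit_predict(fingerprints)
    valid = labels[labels >= 0]
    if valid.size == 0:
        # No cluster found: fall back to accepting all updates.
        accepted = list(range(len(client_updates)))
    else:
        majority = np.bincount(valid).argmax()
        accepted = [i for i, l in enumerate(labels) if l == majority]
    # Federated averaging over the accepted updates, layer by layer.
    return [np.mean([client_updates[i][k] for i in accepted], axis=0)
            for k in range(len(client_updates[0]))]
```

Because the filtering operates on the spectrum of the updates rather than on raw weight values, such a scheme needs no assumption about the attack type or the clients' data distributions; only the requirement that benign clients form the majority cluster.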
