Tianyue Chu, Devriş İşler (IMDEA Networks Institute & Universidad Carlos III de Madrid), Nikolaos Laoutaris (IMDEA Networks Institute)

Federated Learning (FL) has evolved into a pivotal paradigm for collaborative machine learning, enabling a centralised server to compute a global model by aggregating the local models trained by clients. However, the distributed nature of FL renders it susceptible to poisoning attacks that exploit FEDAVG, its linear aggregation rule. To address this vulnerability, FEDQV has recently been introduced as a superior alternative to FEDAVG, specifically designed to mitigate poisoning attacks by taxing deviating clients more than linearly. Nevertheless, FEDQV remains exposed to privacy attacks that aim to infer private information from clients’ local models. To counteract such privacy threats, a well-known approach is to use a Secure Aggregation (SA) protocol to ensure that the server is unable to inspect individual trained models as it aggregates them. In this work, we show how to implement SA on top of FEDQV in order to address both poisoning and privacy attacks. We mount several privacy attacks against FEDQV and demonstrate the effectiveness of SA in countering them.
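As a rough illustration of the SA idea the abstract refers to, the Python sketch below shows pairwise additive masking in the style of Bonawitz et al.'s Secure Aggregation: each pair of clients agrees on a random mask that one adds and the other subtracts, so the server sees only masked individual updates while the masks cancel in the aggregate. This is a minimal sketch, not the paper's protocol; all names, sizes, and the equal-weight FEDAVG mean at the end are illustrative assumptions.

```python
# Minimal sketch (not the paper's protocol): pairwise additive masking,
# the core idea behind Bonawitz et al.-style Secure Aggregation.
# Each pair of clients (i, j) shares a random mask; client i adds it and
# client j subtracts it, so every mask cancels in the server's sum while
# the server only ever sees masked individual updates.

import numpy as np

rng = np.random.default_rng(0)

num_clients = 4
dim = 5  # toy model size

# Hypothetical local model updates (in practice, locally trained weights).
updates = [rng.normal(size=dim) for _ in range(num_clients)]

# Pairwise masks: pair_mask[(i, j)] is agreed between clients i and j
# (in a real protocol, derived from a Diffie-Hellman shared secret).
pair_mask = {}
for i in range(num_clients):
    for j in range(i + 1, num_clients):
        pair_mask[(i, j)] = rng.normal(size=dim)

def masked_update(i):
    """What client i actually sends to the server."""
    m = updates[i].copy()
    for j in range(num_clients):
        if i < j:
            m += pair_mask[(i, j)]   # the lower-indexed client adds the mask
        elif j < i:
            m -= pair_mask[(j, i)]   # the higher-indexed client subtracts it
    return m

# The server aggregates masked updates; the pairwise masks cancel in the sum,
# so it recovers the correct aggregate without seeing any individual update.
server_sum = sum(masked_update(i) for i in range(num_clients))
fedavg_model = server_sum / num_clients  # plain FEDAVG: equal-weight mean

assert np.allclose(fedavg_model, np.mean(updates, axis=0))
print(fedavg_model)
```

The same cancellation trick applies whether the server computes a plain mean, as above, or a weighted rule such as FEDQV's: the server learns only the aggregate it is meant to compute.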
