Tianyue Chu, Devriş İşler (IMDEA Networks Institute & Universidad Carlos III de Madrid), Nikolaos Laoutaris (IMDEA Networks Institute)

Federated Learning (FL) has evolved into a pivotal paradigm for collaborative machine learning, enabling a centralised server to compute a global model by aggregating the local models trained by clients. However, the distributed nature of FL renders it susceptible to poisoning attacks that exploit its linear aggregation rule, FEDAVG. To address this vulnerability, FEDQV was recently introduced as a superior alternative to FEDAVG, specifically designed to mitigate poisoning attacks by taxing deviating clients more than linearly. Nevertheless, FEDQV remains exposed to privacy attacks that aim to infer private information from clients' local models. A well-known countermeasure against such privacy threats is a Secure Aggregation (SA) protocol, which ensures that the server cannot inspect individual trained models as it aggregates them. In this work, we show how to implement SA on top of FEDQV in order to address both poisoning and privacy attacks. We mount several privacy attacks against FEDQV and demonstrate the effectiveness of SA in countering them.
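The two aggregation ingredients named above can be illustrated in a minimal sketch: FEDAVG's linear (weighted-average) rule, and the pairwise additive-masking idea behind typical SA protocols, where masks agreed between client pairs cancel in the server's sum. All vectors, weights, and mask values below are hypothetical toy data, not the paper's actual protocol or parameters.

```python
import random

random.seed(0)
DIM, N = 4, 3  # toy model dimension and number of clients

# Hypothetical local updates (flattened parameter vectors) from N clients.
updates = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(N)]
weights = [0.5, 0.3, 0.2]  # e.g. proportional to local dataset sizes

def vec_add(a, b): return [x + y for x, y in zip(a, b)]
def vec_sub(a, b): return [x - y for x, y in zip(a, b)]
def vec_scale(c, a): return [c * x for x in a]

# FEDAVG: the server's linear aggregation rule, a weighted average.
fedavg = [0.0] * DIM
for w, u in zip(weights, updates):
    fedavg = vec_add(fedavg, vec_scale(w, u))

# SA via pairwise additive masking (simplified sketch): each client pair
# (i, j) with i < j shares a random mask; i adds it, j subtracts it, so
# all masks cancel in the sum and individual updates stay hidden.
masks = {(i, j): [random.gauss(0, 1) for _ in range(DIM)]
         for i in range(N) for j in range(i + 1, N)}

masked = []
for i in range(N):
    m = vec_scale(weights[i], updates[i])  # client pre-scales by its weight
    for j in range(N):
        if i < j:
            m = vec_add(m, masks[(i, j)])
        elif j < i:
            m = vec_sub(m, masks[(j, i)])
    masked.append(m)

# The server only ever sees masked vectors, yet their sum is the aggregate.
server_sum = [0.0] * DIM
for m in masked:
    server_sum = vec_add(server_sum, m)

assert all(abs(a - b) < 1e-9 for a, b in zip(server_sum, fedavg))
```

The final assertion shows why SA blinds the server without changing the result: the pairwise masks cancel exactly, so the server recovers the same weighted sum it would have computed in the clear. Real SA protocols additionally handle client dropouts and derive masks from key agreement rather than plain randomness.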
