Cheng Zhang (Hunan University), Yang Xu (Hunan University), Jianghao Tan (Hunan University), Jiajie An (Hunan University), Wenqiang Jin (Hunan University)

Clustered federated learning (CFL) is a promising framework for addressing the challenges of non-IID (non-independent and identically distributed) data and client heterogeneity in federated learning. It groups clients into clusters based on the similarity of their data distributions or model updates. However, classic CFL frameworks pose a severe threat to clients' privacy, since an honest-but-curious server can easily infer the bias of each client's data distribution (i.e., its preferences) from its cluster membership. In this work, we propose MingledPie, a privacy-enhanced clustered federated learning framework that resists the server's preference profiling by allowing clients to group themselves into clusters spontaneously. Specifically, each cluster mingles two types of clients: a majority whose data distributions are similar, and a small portion whose are not (false positive clients). As a result, the CFL server cannot link clients' data preferences to the clusters they belong to. To achieve this, we design an indistinguishable cluster identity generation approach that enables clients to form clusters containing a certain proportion of false positive members without the assistance of the CFL server. However, training with mingled false positive clients inevitably degrades the performance of each cluster's global model. To rebuild accurate cluster models, we represent the mingled cluster models as a system of linear equations in the accurate models and solve it. Rigorous theoretical analyses evaluate the usability and security of the proposed designs. In addition, extensive evaluations of MingledPie on six open-source datasets show that it defends against preference profiling attacks with an accuracy of 69.4% on average, while the model accuracy loss is limited to between 0.02% and 3.00%.
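To illustrate the model-recovery step, below is a minimal NumPy sketch (not the authors' implementation). It assumes each observed mingled cluster model is a known linear combination of the true cluster models, with mixing weights A[k, j] giving the fraction of cluster k's members whose data actually matches cluster j; the cluster count, model size, and 80%/10% mixing proportions are illustrative assumptions.

```python
import numpy as np

# Toy setup: K clusters, each model flattened to d parameters.
K, d = 3, 5
rng = np.random.default_rng(0)

W_true = rng.normal(size=(K, d))   # true per-cluster models (unknown in practice)
A = np.full((K, K), 0.1)           # 10% false positives drawn from each other cluster
np.fill_diagonal(A, 0.8)           # 80% genuine members per cluster (rows sum to 1)
M = A @ W_true                     # observed mingled cluster models

# If the mixing proportions A are known (e.g., derived from the cluster
# identity generation step), the accurate models solve the linear system
# A W = M, one right-hand side per model coordinate.
W_rec = np.linalg.solve(A, M)

print(np.allclose(W_rec, W_true))  # True: exact recovery in this toy setting
```

Recovery is exact here because A is square and well-conditioned; in a real deployment the mixing proportions would only be known approximately, so the solve yields an estimate of the accurate cluster models rather than a perfect reconstruction.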
