Aydin Abadi (Newcastle University), Vishnu Asutosh Dasu (Pennsylvania State University), Sumanta Sarkar (University of Warwick)

Deduplication is a vital preprocessing step that enhances machine learning model performance and saves training time and energy. However, enhancing federated learning through deduplication poses challenges, especially regarding scalability and potential privacy violations if deduplication involves sharing all clients' data. In this paper, we address the problem of deduplication in a federated setup by introducing a pioneering protocol, Efficient Privacy-Preserving Multi-Party Deduplication (EP-MPD). It efficiently removes duplicates from multiple clients' datasets without compromising data privacy. EP-MPD is constructed in a modular fashion, utilizing two novel variants of the Private Set Intersection protocol. Our extensive experiments demonstrate the significant benefits of deduplication in federated learning of large language models. For instance, we observe up to 19.62% improvement in perplexity and up to 27.95% reduction in running time while varying the duplication level between 10% and 30%. EP-MPD effectively balances privacy and performance in federated learning, making it a valuable solution for large-scale applications.
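The abstract builds EP-MPD from variants of Private Set Intersection (PSI). As a rough illustration of how a PSI primitive lets two clients discover which records they share, and hence which duplicates to drop, without exchanging records in the clear, here is a minimal Diffie-Hellman-style two-party PSI sketch in Python. It is not the EP-MPD protocol from the paper: the class and function names (`PsiParty`, `gen_safe_prime`) are hypothetical, the parameters are toy-sized and insecure, and the construction is the textbook two-party DH-PSI rather than the paper's multi-party design.

```python
"""Toy Diffie-Hellman-style PSI sketch (illustrative only, not EP-MPD)."""
import hashlib
import secrets


def is_probable_prime(n: int, rounds: int = 20) -> bool:
    """Miller-Rabin primality test."""
    if n < 2:
        return False
    for small in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % small == 0:
            return n == small
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = secrets.randbelow(n - 3) + 2
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True


def gen_safe_prime(bits: int = 256) -> int:
    """Find p = 2q + 1 with both p and q prime (toy-sized; insecure)."""
    while True:
        q = secrets.randbits(bits) | (1 << (bits - 1)) | 1
        if is_probable_prime(q) and is_probable_prime(2 * q + 1):
            return 2 * q + 1


class PsiParty:
    """One client holding a private list of text records."""

    def __init__(self, records, p):
        self.p = p
        self.q = (p - 1) // 2
        self.key = secrets.randbelow(self.q - 1) + 1  # private exponent
        self.records = list(records)

    def _hash_to_group(self, record: str) -> int:
        # Hash into the order-q subgroup of quadratic residues mod p.
        h = int.from_bytes(hashlib.sha256(record.encode()).digest(), "big")
        return pow(h % self.p, 2, self.p)

    def blind_own(self):
        """Blind each own record as H(x)^key."""
        return [pow(self._hash_to_group(r), self.key, self.p) for r in self.records]

    def reblind(self, blinded):
        """Raise another party's blinded values to our key (order preserved)."""
        return [pow(v, self.key, self.p) for v in blinded]


if __name__ == "__main__":
    p = gen_safe_prime()
    alice = PsiParty(["the quick brown fox", "duplicate sentence", "unique to alice"], p)
    bob = PsiParty(["duplicate sentence", "unique to bob"], p)

    a_blind = alice.blind_own()             # {H(x)^a}, sent to Bob
    b_blind = bob.blind_own()               # {H(y)^b}, sent to Alice
    a_double = bob.reblind(a_blind)         # {H(x)^(ab)}, returned to Alice
    b_double = set(alice.reblind(b_blind))  # Alice computes {H(y)^(ab)} herself

    # Records whose double-blinded value appears on both sides are duplicates.
    duplicates = [r for r, v in zip(alice.records, a_double) if v in b_double]
    print("Records Alice shares with Bob (candidates for deduplication):", duplicates)
```

Because both parties only ever see hashed-and-exponentiated values, each learns the shared records without learning anything else about the other's dataset; EP-MPD's contribution, per the abstract, is making this kind of primitive efficient and modular across many clients in a federated setting.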
