Xiaoguang Li (Xidian University, Purdue University), Zitao Li (Alibaba Group (U.S.) Inc.), Ninghui Li (Purdue University), Wenhai Sun (Purdue University)

Recent studies reveal that local differential privacy (LDP) protocols are vulnerable to data poisoning attacks, in which an attacker manipulates the server's final estimate by exploiting the characteristics of LDP and sending carefully crafted data from a small fraction of controlled local clients. This vulnerability raises concerns regarding the robustness and reliability of LDP in hostile environments.
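To make this attack surface concrete, the sketch below (illustrative only; the perturbation probabilities, client counts, and function name are our own assumptions, not the paper's experiments) shows how fake reports from a small fraction of attacker-controlled clients inflate the unbiased frequency estimate of a generic generalized-randomized-response (GRR) style frequency oracle.

```python
# Illustrative-only sketch: fake reports inflating a GRR-style frequency estimate.
# p = probability a client keeps its true value, q = probability it flips to the target.

def estimated_frequency(support_count: int, total_reports: int,
                        p: float, q: float) -> float:
    """Standard unbiased estimator for GRR-style frequency oracles."""
    return (support_count / total_reports - q) / (p - q)

p, q = 0.75, 0.05        # assumed perturbation probabilities (eps = ln 15, domain size 6)
honest_support = 850     # expected supports for the target among 10,000 honest clients
fake_clients = 500       # attacker-controlled clients, all reporting the target

print(estimated_frequency(honest_support, 10_000, p, q))                  # ~0.05 before the attack
print(estimated_frequency(honest_support + fake_clients, 10_500, p, q))   # ~0.11 after the attack
```

Because every fake report unconditionally increments the target's support count, a 5% fraction of malicious clients roughly doubles the target's estimated frequency in this toy setting.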

In this paper, we conduct a systematic investigation of the robustness of state-of-the-art LDP protocols for numerical attributes, i.e., categorical frequency oracles (CFOs) with binning and consistency, and distribution reconstruction. We evaluate protocol robustness through an attack-driven approach and propose new metrics for cross-protocol attack gain measurement. The results indicate that Square Wave and CFO-based protocols in the Server setting are more robust against the attack than CFO-based protocols in the User setting. Our evaluation also uncovers new relationships between LDP security and its inherent design choices. We find that the hash domain size in local-hashing-based LDP has a profound impact on protocol robustness beyond its well-known effect on utility. Further, we propose a zero-shot attack detection method that leverages the rich reconstructed distribution information. Experiments show that our detection significantly improves on existing methods and effectively identifies data manipulation in challenging scenarios.
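For readers unfamiliar with local hashing, the following minimal sketch (assuming a SHA-256-based hash family and our own parameter names; this is not the authors' code) shows where the hash domain size g enters a local-hashing randomizer, the design parameter whose robustness impact the abstract highlights.

```python
import hashlib
import math
import random

def lh_report(value: str, epsilon: float, g: int) -> tuple[int, int]:
    """Local-hashing randomizer sketch (OLH-style).

    The client hashes its value into a domain of size g (the hash domain
    size discussed above) and applies generalized randomized response to
    the hashed bucket.
    """
    seed = random.getrandbits(32)  # identifies the client's randomly chosen hash function
    bucket = int(hashlib.sha256(f"{seed}:{value}".encode()).hexdigest(), 16) % g
    p = math.exp(epsilon) / (math.exp(epsilon) + g - 1)
    if random.random() < p:
        return seed, bucket                            # keep the true bucket
    return seed, random.choice([b for b in range(g) if b != bucket])  # report another bucket

def supports(report: tuple[int, int], candidate: str, g: int) -> bool:
    """Server-side check: does a report support a candidate value?"""
    seed, bucket = report
    return int(hashlib.sha256(f"{seed}:{candidate}".encode()).hexdigest(), 16) % g == bucket
```

In Optimized Local Hashing, g is conventionally set to e^epsilon + 1 so that p = 1/2 to minimize estimation variance; the paper's point is that this choice also matters for robustness against poisoning, not only for utility.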
