Chunyi Zhou (Nanjing University of Science and Technology), Yansong Gao (Nanjing University of Science and Technology), Anmin Fu (Nanjing University of Science and Technology), Kai Chen (Chinese Academy of Sciences), Zhiyang Dai (Nanjing University of Science and Technology), Zhi Zhang (CSIRO's Data61), Minhui Xue (CSIRO's Data61), Yuqing Zhang (University of Chinese Academy of Sciences)

Federated learning (FL) trains a global model across a number of decentralized users, each with a local dataset. Compared to traditional centralized learning, FL does not require direct access to local datasets and thus aims to mitigate data privacy concerns. However, data privacy leakage in FL still exists due to inference attacks, including membership inference, property inference, and data inversion.
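To make the setting concrete, below is a minimal FedAvg-style sketch in Python. All names are hypothetical and a logistic-regression model stands in for the real local model; this is an illustration of the FL workflow described above, not the paper's implementation. Each client refines the received global weights on its private data, and the server only ever sees and averages the resulting weights.

```python
import numpy as np

def grad_logreg(w, X, y):
    # Gradient of binary cross-entropy for a linear model
    # (illustrative stand-in for the client's real model).
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return X.T @ (p - y) / len(y)

def local_update(w_global, X, y, lr=0.1, epochs=1):
    # Client side: refine the global weights on private local data;
    # the raw samples never leave the client.
    w = w_global.copy()
    for _ in range(epochs):
        w = w - lr * grad_logreg(w, X, y)
    return w

def federated_round(w_global, clients):
    # Server side: average the clients' locally updated weights (FedAvg-style).
    return np.mean([local_update(w_global, X, y) for X, y in clients], axis=0)

# Toy usage: two clients, each holding a private dataset of 50 samples.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 5)), rng.integers(0, 2, size=50).astype(float))
           for _ in range(2)]
w = np.zeros(5)
for _ in range(10):
    w = federated_round(w, clients)
```

Note that only model weights cross the network in this loop; the inference attacks listed above exploit exactly those exchanged updates rather than the raw data.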

In this work, we propose a new type of privacy inference attack, coined Preference Profiling Attack (PPA), that accurately profiles the private preferences of a local user, e.g., the most liked (disliked) items from the user's online shopping records and the most common expressions in the user's selfies. In general, PPA can profile a user's top-$k$ preferences (i.e., $k = 1, 2, 3$, and $k = 1$ in particular) contingent on the local client (user)'s characteristics. Our key insight is that the gradient variation of a local user's model has a distinguishable sensitivity to the sample proportion of a given class, especially the majority (minority) class. By observing a user model's gradient sensitivity to a class, PPA can profile the sample proportion of that class in the user's local dataset, thereby exposing the user's preference for the class. The inherent statistical heterogeneity of FL further facilitates PPA. We have extensively evaluated PPA's effectiveness using four datasets (MNIST, CIFAR10, RAF-DB and Products-10K). Our results show that PPA achieves 90% and 98% top-$1$ attack accuracy on MNIST and CIFAR10, respectively. More importantly, in real-world commercial scenarios of shopping (i.e., Products-10K) and social networking (i.e., RAF-DB), PPA achieves a top-$1$ attack accuracy of 78% in the former case to infer the most ordered items (i.e., as a commercial competitor), and 88% in the latter case to infer a victim user's most frequent facial expression, e.g., disgust. The top-$3$ and top-$2$ attack accuracies reach up to 88% and 100% for Products-10K and RAF-DB, respectively. We also show that PPA is insensitive to the number of FL participants (up to 100 in our tests) and to the number of local training epochs a user performs (up to 20 in our tests). Although existing countermeasures such as dropout and differential privacy can lower PPA's accuracy to some extent, they unavoidably incur notable degradation of the global model. The source code is available at https://github.com/PPAattack.
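To illustrate the gradient-sensitivity intuition, here is a minimal sketch of how an attacker-side probe could be instantiated. The `probe_sets` construction, the logistic-regression stand-in, and the reading that a well-fit (majority) class yields smaller probing gradients are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def probe_gradient_norm(w, X, y):
    # L2 norm of a logistic-regression gradient on a probe batch
    # (a simplified one-vs-rest stand-in for the client's real model).
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return np.linalg.norm(X.T @ (p - y) / len(y))

def profile_preference(w_client, probe_sets, k=1):
    # Rank classes by the client model's gradient sensitivity on auxiliary
    # per-class probe samples. Assumed intuition: classes the client holds
    # many samples of are already well fit, so their probing gradients are
    # small; rare classes produce larger gradients.
    scores = {c: probe_gradient_norm(w_client, X, y)
              for c, (X, y) in probe_sets.items()}
    return sorted(scores, key=scores.get)[:k]

# Toy usage: probe_sets maps each class label to (features, targets) drawn
# from auxiliary data the attacker controls.
rng = np.random.default_rng(1)
probe_sets = {c: (rng.normal(size=(20, 5)), np.ones(20)) for c in range(3)}
top1 = profile_preference(rng.normal(size=5), probe_sets, k=1)
```

Sorting the per-class scores directly yields a top-$k$ guess; the sketch only captures the abstract's core correlation between a class's sample proportion and the model's gradient behavior on that class.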

View More Papers

Tag of the Dead: How Terminated SaaS Tags Become...

Takahito Sakamoto, Takuya Murozono (DataSign Inc)


Reminding Drivers of the Stalking Vehicles on the Road

Wei Sun, Kannan Srinivasan (The Ohio State University)


SoundLock: A Novel User Authentication Scheme for VR Devices...

Huadi Zhu (The University of Texas at Arlington), Mingyan Xiao (The University of Texas at Arlington), Demoria Sherman (The University of Texas at Arlington), Ming Li (The University of Texas at Arlington)


Death By A Thousand COTS: Disrupting Satellite Communications using...

Frederick Rawlins, Richard Baker, and Ivan Martinovic (University of Oxford); Presenter: Frederick Rawlins
