Pencil: Private and Extensible Collaborative Learning without the Non-Colluding Assumption

Xuanqi Liu (Tsinghua University), Zhuotao Liu (Tsinghua University), Qi Li (Tsinghua University), Ke Xu (Tsinghua University), Mingwei Xu (Tsinghua University)

The escalating focus on data privacy poses significant challenges for collaborative neural network training, where data ownership and model training/deployment responsibilities reside with distinct entities. Our community has made substantial contributions toward addressing this challenge, proposing various approaches such as federated learning (FL) and privacy-preserving machine learning based on cryptographic constructs like homomorphic encryption (HE) and secure multiparty computation (MPC). However, FL completely overlooks model privacy, and HE has limited extensibility (confined to only one data provider). While state-of-the-art MPC frameworks provide reasonable throughput and simultaneously ensure model/data privacy, they rely on a critical non-colluding assumption among the computing servers, and relaxing this assumption is still an open problem.

In this paper, we present Pencil, the first private training framework for collaborative learning that simultaneously offers data privacy, model privacy, and extensibility to multiple data providers, without relying on the non-colluding assumption. Our fundamental design principle is to construct the n-party collaborative training protocol on top of an efficient two-party protocol, while ensuring that switching between data providers during model training introduces no extra cost. We introduce several novel cryptographic protocols to realize this design principle and conduct a rigorous security and privacy analysis. Our comprehensive evaluations of Pencil demonstrate that (i) models trained in plaintext and models trained privately using Pencil exhibit nearly identical test accuracies; (ii) the training overhead of Pencil is greatly reduced: Pencil achieves 10∼260× higher throughput and two orders of magnitude less communication than prior art; and (iii) Pencil is resilient against both existing and adaptive (white-box) attacks.
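To make the stated design principle concrete, below is a minimal, hypothetical sketch (in Python) of how an n-party training loop can be composed from a single two-party protocol. All names here (DataProvider, two_party_train_step, and the placeholder step logic) are illustrative assumptions, not Pencil's actual protocols or API; in the real system the two-party step would run cryptographic sub-protocols (e.g., HE- and MPC-based layer computations) so that neither party learns the other's secrets.

# Hypothetical sketch of the abstract's design principle: an n-party
# collaborative training loop built from one efficient 2-party protocol.
# Names and logic are illustrative assumptions, not Pencil's actual code.

from dataclasses import dataclass
from typing import List

@dataclass
class DataProvider:
    """One of the n data providers; holds only its own private data."""
    name: str
    private_batches: List[list]  # stand-in for encrypted/secret-shared batches

def two_party_train_step(model_state: dict, provider: DataProvider, batch) -> dict:
    """Placeholder for the 2-party protocol between the model owner and one
    data provider. A real step would execute cryptographic sub-protocols so
    that the provider never sees the model and the owner never sees the data;
    here it just returns updated state."""
    model_state = dict(model_state)
    model_state["steps"] += 1
    return model_state

def collaborative_training(model_state: dict, providers: List[DataProvider]) -> dict:
    # Key property: the loop rotates among providers between iterations with
    # no per-switch setup cost, because every iteration runs the *same*
    # 2-party protocol and carries no provider-specific state.
    while any(p.private_batches for p in providers):
        for provider in providers:
            if provider.private_batches:
                batch = provider.private_batches.pop(0)
                model_state = two_party_train_step(model_state, provider, batch)
    return model_state

if __name__ == "__main__":
    providers = [DataProvider(f"P{i}", [[i]] * 2) for i in range(3)]
    print(collaborative_training({"steps": 0}, providers))  # {'steps': 6}

The structural point is that two_party_train_step holds no provider-specific setup state, so moving to a different data provider between iterations is free; this is exactly the extensibility property the abstract claims.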
