Xinqian Wang (RMIT University), Xiaoning Liu (RMIT University), Shangqi Lai (CSIRO Data61), Xun Yi (RMIT University), Xingliang Yuan (University of Melbourne)

Secure inference is designed to enable encrypted machine learning model prediction over encrypted data. It eases privacy concerns when models are deployed in Machine Learning as a Service (MLaaS). For efficiency, most recent secure inference protocols are constructed using secure multi-party computation (MPC) techniques, which ensure that MLaaS computes inference without learning the inputs of users or model owners. However, MPC-based protocols do not hide information revealed by their outputs. In the context of secure inference, prediction outputs (i.e., inference results on encrypted user inputs) are revealed to users. As a result, adversaries can compromise the output privacy of secure inference, i.e., launch Membership Inference Attacks (MIAs) by querying encrypted models, just like MIAs against plaintext inference.
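
To illustrate the threat, here is a minimal sketch of a confidence-based MIA; the function name and threshold are illustrative and not taken from the paper. The point is that the attacker needs nothing beyond the revealed prediction vector:

```python
import numpy as np

def confidence_mia(prediction: np.ndarray, threshold: float = 0.9) -> bool:
    """Guess membership from a single revealed prediction vector.

    Models tend to be more confident on training (member) samples,
    so a high top-1 probability is taken as evidence of membership.
    """
    return float(np.max(prediction)) >= threshold

# The attacker only needs query access: each (encrypted) query
# still returns a probability vector, which suffices for the attack.
pred = np.array([0.97, 0.02, 0.01])   # inference output revealed to the user
print(confidence_mia(pred))           # True -> guessed "member"
```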

We observe that, compared to its plaintext counterpart on identical user inputs, MPC-based secure inference often yields perturbed predictions due to approximations of nonlinear functions such as softmax. Thus, we evaluate whether MIAs can still exploit such perturbed predictions against known secure inference protocols. Our results show that secure inference remains vulnerable to MIAs: the adversary can steal membership information with success rates comparable to those of plaintext MIAs.
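
As an illustration of why such perturbations arise, the sketch below compares exact softmax with a softmax built on a truncated Taylor series for exp. The degree-2 truncation is our illustrative stand-in; real MPC protocols use their own polynomial or iterative approximations:

```python
import math
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    e = np.exp(z - z.max())
    return e / e.sum()

def approx_softmax(z: np.ndarray, terms: int = 3) -> np.ndarray:
    # Truncated Taylor series of exp(x); a stand-in for the
    # polynomial/iterative approximations used inside MPC protocols.
    z = z - z.max()
    e = sum(z**k / math.factorial(k) for k in range(terms))
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])
exact = softmax(logits)
approx = approx_softmax(logits)
print(exact)
print(approx)                          # noticeably perturbed probabilities
print(np.abs(exact - approx).max())    # the perturbation an MIA must survive
```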

To tackle this open challenge, we propose SIGuard, a framework that guards the output privacy of secure inference from being exploited by MIAs. SIGuard's protocol can be seamlessly integrated into existing MPC-based secure inference protocols without intruding on their computation. It takes the encrypted predictions output by secure inference and crafts noise that perturbs them without compromising inference accuracy; only the perturbed predictions are revealed to users at the end of protocol execution. SIGuard achieves stringent privacy guarantees via a co-design of MPC techniques and machine learning. We further conduct comprehensive evaluations to find the optimal hyper-parameters for balancing efficiency and defense effectiveness against MIAs. Together, our evaluations show that SIGuard effectively defends against MIAs, reducing attack accuracy to around random guessing with an overhead of 1.1 s, about 24.8% of the secure inference time (3.29 s), on the widely used ResNet34 over CIFAR-10.
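
The following is a minimal plaintext sketch of the core idea only: noise distorts the confidence scores while the predicted label is kept intact, so inference accuracy is unaffected. The projection and label-restoration steps here are our illustrative simplifications; SIGuard performs the corresponding computation on encrypted values inside MPC:

```python
import numpy as np

def perturb_prediction(pred: np.ndarray, scale: float = 0.1,
                       rng: np.random.Generator | None = None) -> np.ndarray:
    """Add noise to a probability vector without changing its argmax."""
    rng = rng or np.random.default_rng()
    label = int(np.argmax(pred))
    noisy = pred + rng.normal(0.0, scale, size=pred.shape)
    noisy = np.clip(noisy, 1e-6, None)
    noisy /= noisy.sum()                  # project back onto the simplex
    # Restore the original top-1 label if the noise flipped it.
    if int(np.argmax(noisy)) != label:
        noisy[label] = noisy.max() + 1e-3
        noisy /= noisy.sum()
    return noisy

pred = np.array([0.97, 0.02, 0.01])
out = perturb_prediction(pred, scale=0.2, rng=np.random.default_rng(0))
print(out, np.argmax(out) == np.argmax(pred))   # scores distorted, label preserved
```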
