Zitao Chen (University of British Columbia), Karthik Pattabiraman (University of British Columbia)

Modern machine learning (ML) ecosystems offer a surging number of ML frameworks and code repositories that can greatly facilitate the development of ML models. Today, even ordinary data holders who are not ML experts can use off-the-shelf codebases to build high-performance ML models on their data, much of which is sensitive in nature (e.g., clinical records).

In this work, we consider a malicious ML provider who supplies model-training code to the data holders, does not have access to the training process, and has only black-box query access to the resulting model. In this setting, we demonstrate a new form of membership inference attack that is strictly more powerful than prior art. Our attack empowers the adversary to reliably de-identify all the training samples (average >99% attack TPR @ 0.1% FPR), while the compromised models still maintain performance competitive with their uncorrupted counterparts (average <1% accuracy drop). Moreover, we show that the poisoned models can effectively disguise the amplified membership leakage from common membership privacy audits; the leakage can be revealed only by a set of secret samples known to the adversary. Overall, our study not only points to the worst-case membership privacy leakage, but also unveils a common pitfall underlying existing privacy auditing methods, calling for future efforts to rethink the current practice of auditing membership privacy in machine learning models.
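The headline metric above is the attack's true-positive rate measured at a fixed, very low false-positive rate. As a minimal, hypothetical sketch (not the paper's attack), the snippet below shows how such a TPR @ 0.1% FPR figure can be computed from per-sample attack scores, e.g., confidences obtained via black-box queries; the function name and the synthetic score distributions are illustrative assumptions, not artifacts from the paper.

```python
# Illustrative sketch only: evaluate a score-threshold membership test
# at a fixed low false-positive rate (TPR @ 0.1% FPR).
import numpy as np

def tpr_at_fpr(member_scores, nonmember_scores, target_fpr=0.001):
    """Choose the threshold whose FPR on non-members is at most target_fpr,
    then report the fraction of members scoring above that threshold."""
    # Threshold at the (1 - target_fpr) quantile of non-member scores.
    threshold = np.quantile(nonmember_scores, 1.0 - target_fpr)
    return float(np.mean(member_scores > threshold))

# Hypothetical usage with synthetic scores: members tend to score higher
# (e.g., higher model confidence on training samples).
rng = np.random.default_rng(0)
members = rng.normal(loc=2.0, scale=1.0, size=10_000)
nonmembers = rng.normal(loc=0.0, scale=1.0, size=10_000)
print(f"TPR @ 0.1% FPR: {tpr_at_fpr(members, nonmembers):.3f}")
```

Reporting TPR at a low FPR, rather than average-case accuracy or AUC, is the standard way to capture whether an attack can confidently identify even a few training samples, which is why a near-100% TPR at 0.1% FPR indicates essentially complete de-identification of the training set.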
