Ahmed Salem (CISPA Helmholtz Center for Information Security), Yang Zhang (CISPA Helmholtz Center for Information Security), Mathias Humbert (Swiss Data Science Center, ETH Zurich/EPFL), Pascal Berrang (CISPA Helmholtz Center for Information Security), Mario Fritz (CISPA Helmholtz Center for Information Security), Michael Backes (CISPA Helmholtz Center for Information Security)

Machine learning (ML) has become a core component of many real-world applications, and training data is a key factor that drives current progress. This huge success has led Internet companies to deploy machine learning as a service (MLaaS). Recently, the first membership inference attack showed that extraction of information about the training set is possible in such MLaaS settings, which has severe security and privacy implications.

However, the early demonstrations of the feasibility of such attacks rely on many assumptions about the adversary, such as using multiple so-called shadow models, knowledge of the target model structure, and having a dataset from the same distribution as the target model's training data. We relax all these key assumptions, showing that such attacks are very broadly applicable at low cost and thereby pose a more severe risk than previously thought. We present the most comprehensive study so far on this emerging and developing threat, using eight diverse datasets that show the viability of the proposed attacks across domains.
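To illustrate why relaxing these assumptions matters, the simplest end of this attack spectrum needs nothing more than black-box access to the target model's output posteriors. Below is a minimal sketch of a confidence-thresholding membership inference attack; it is not the paper's exact method, and the function names and threshold value are illustrative assumptions (in practice the threshold would be calibrated, e.g., using a shadow model).

```python
# Minimal membership inference sketch (illustrative, not the paper's exact method).
# Assumption: black-box access to the target model's per-class posteriors.
import numpy as np

def membership_scores(posteriors: np.ndarray) -> np.ndarray:
    """Maximum posterior per sample; overconfident predictions hint at training members."""
    return posteriors.max(axis=1)

def infer_membership(posteriors: np.ndarray, threshold: float = 0.9) -> np.ndarray:
    """Predict 'member' (True) when the target model is highly confident.
    The threshold here is an arbitrary placeholder for illustration."""
    return membership_scores(posteriors) >= threshold

# Example: score posteriors obtained by querying the target model.
posteriors = np.array([
    [0.97, 0.02, 0.01],  # very confident -> likely a training member
    [0.40, 0.35, 0.25],  # uncertain      -> likely a non-member
])
print(infer_membership(posteriors))  # [ True False]
```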

In addition, we propose the first effective defense mechanisms against this broader class of membership inference attacks that maintain a high level of utility of the ML model.
