Hongsheng Hu (CSIRO's Data61), Shuo Wang (CSIRO's Data61), Jiamin Chang (University of New South Wales), Haonan Zhong (University of New South Wales), Ruoxi Sun (CSIRO's Data61), Shuang Hao (University of Texas at Dallas), Haojin Zhu (Shanghai Jiao Tong University), Minhui Xue (CSIRO's Data61)

The right to be forgotten requires the removal, or "unlearning," of a user's data from machine learning models. However, in the context of Machine Learning as a Service (MLaaS), retraining a model from scratch to fulfill an unlearning request is impractical because the service provider (the server) typically does not retain the training data. Moreover, approximate unlearning introduces a complex trade-off between utility (model performance) and privacy (unlearning performance). In this paper, we explore the potential threats posed by unlearning services in MLaaS, specifically over-unlearning, where more information is unlearned than expected. We propose two strategies that leverage over-unlearning to assess its impact on this trade-off, under black-box access settings in which existing machine unlearning attacks do not apply. The effectiveness of these strategies is evaluated through extensive experiments on benchmark datasets, across various model architectures and representative unlearning approaches. The results indicate significant potential for both strategies to undermine model efficacy in unlearning scenarios. This study uncovers an underexplored gap between unlearning and contemporary MLaaS, highlighting the need for careful consideration in balancing data unlearning, model utility, and security.
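
The sketch below illustrates, at a high level, how over-unlearning might be quantified in a black-box MLaaS setting as described above: by comparing the model's utility on data that should be unaffected before and after an unlearning request. It is not the paper's implementation; the callables `model_api` and `request_unlearning` are hypothetical stand-ins for whatever prediction and unlearning interfaces a service provider exposes.

```python
# A minimal sketch (assumed interfaces, not the paper's method) of measuring
# the utility side-effect of a single unlearning request under black-box access.

import numpy as np


def accuracy(model_api, inputs, labels):
    """Black-box accuracy: only prediction queries are assumed to be available."""
    preds = np.array([model_api(x) for x in inputs])
    return float(np.mean(preds == np.asarray(labels)))


def over_unlearning_gap(model_api, request_unlearning, unlearn_samples,
                        retained_inputs, retained_labels):
    """Utility drop on retained (non-targeted) data caused by one unlearning request.

    A drop substantially larger than what removing only `unlearn_samples`
    should cause suggests the service unlearned more information than requested.
    """
    acc_before = accuracy(model_api, retained_inputs, retained_labels)
    request_unlearning(unlearn_samples)   # server runs its own unlearning routine
    acc_after = accuracy(model_api, retained_inputs, retained_labels)
    return acc_before - acc_after
```

In this framing, the adversarial aspect studied in the paper lies in how `unlearn_samples` are crafted so that the resulting utility gap is amplified, while the measurement itself only requires ordinary query access.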
