Alexander Warnecke (TU Braunschweig), Lukas Pirch (TU Braunschweig), Christian Wressnegger (Karlsruhe Institute of Technology (KIT)), Konrad Rieck (TU Braunschweig)

Removing information from a machine learning model is a non-trivial task that requires partially reverting the training process. This task becomes unavoidable when sensitive data, such as credit card numbers or passwords, accidentally enter the model and need to be removed afterwards. Recently, different concepts for machine unlearning have been proposed to address this problem. While these approaches are effective in removing individual data points, they do not scale to scenarios where larger groups of features and labels need to be reverted. In this paper, we propose the first method for unlearning features and labels. Our approach builds on the concept of influence functions and realizes unlearning through closed-form updates of model parameters. It enables the influence of training data on a learning model to be adapted retrospectively, thereby correcting data leaks and privacy issues. For learning models with strongly convex loss functions, our method provides certified unlearning with theoretical guarantees. For models with non-convex losses, we empirically show that unlearning features and labels is effective and significantly faster than other strategies.
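The full paper specifies the actual update rules; as a rough illustration only, the sketch below shows what a second-order, influence-function-style closed-form update could look like for an L2-regularized logistic regression model. It replaces a set of affected training rows with corrected versions (e.g. with a leaked feature zeroed out or a label fixed) via a single Newton-style step around the trained parameters. All function and variable names are hypothetical and not taken from the authors' code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad(theta, X, y, lam):
    # Gradient of the L2-regularized logistic loss, summed over the given rows.
    p = sigmoid(X @ theta)
    return X.T @ (p - y) + lam * theta

def hessian(theta, X, lam):
    # Hessian of the same loss: X^T diag(p(1-p)) X + lam * I.
    p = sigmoid(X @ theta)
    w = p * (1.0 - p)
    return (X * w[:, None]).T @ X + lam * np.eye(X.shape[1])

def unlearn_update(theta_star, X_all, X_old, y_old, X_new, y_new, lam=1.0):
    """Closed-form (second-order) update that replaces the affected rows
    (X_old, y_old) with their corrected versions (X_new, y_new).

    theta_star is assumed to minimize the loss on the original data; the
    Hessian of the original objective at theta_star is used as an
    approximation of the Hessian after the correction.
    """
    H = hessian(theta_star, X_all, lam)
    # Gradient difference: influence of the corrected rows minus the original ones.
    # The regularization term cancels in the difference, so it is omitted here.
    delta = grad(theta_star, X_new, y_new, lam=0.0) - grad(theta_star, X_old, y_old, lam=0.0)
    # Newton-style step: theta_new = theta_star - H^{-1} * delta.
    return theta_star - np.linalg.solve(H, delta)
```

In this toy setting the step corresponds to one Newton iteration on the corrected objective, which is why it is exact up to the Hessian approximation for strongly convex losses; for non-convex models such an update can only be expected to approximate retraining.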
