Shiming Wang (Shanghai Jiao Tong University), Zhe Ji (Shanghai Jiao Tong University), Liyao Xiang (Shanghai Jiao Tong University), Hao Zhang (Shanghai Jiao Tong University), Xinbing Wang (Shanghai Jiao Tong University), Chenghu Zhou (Chinese Academy of Sciences), Bo Li (Hong Kong University of Science and Technology)

With the increased capabilities at the edge (e.g., mobile devices) and more stringent privacy requirements, it has become a recent trend for deep learning-enabled applications to pre-process sensitive raw data at the edge and transmit the features to the backend cloud for further processing. A typical application is to run machine learning (ML) services on facial images collected from different individuals. To prevent identity theft, conventional methods commonly rely on an adversarial game-based approach to shed the identity information from the features. However, such methods cannot defend against adaptive attacks, in which an attacker takes a countermove against a known defence strategy.

We propose Crafter, a feature crafting mechanism deployed at the edge, to protect identity information from adaptive model inversion attacks while ensuring the ML tasks are properly carried out in the cloud. The key defence strategy is to mislead the attacker towards a non-private prior from which the attacker gains little information about the private identity. In this case, the crafted features act like poison training samples for attackers with adaptive model updates. Experimental results indicate that Crafter successfully defends against both basic and possible adaptive attacks, which cannot be achieved by state-of-the-art adversarial game-based methods.
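To make the high-level idea of feature crafting more concrete, the following is a minimal conceptual sketch, not the paper's actual algorithm. It assumes a hypothetical edge feature extractor `encoder`, a surrogate inversion network `inverter`, a cloud task head `task_head`, and a non-private prior image `prior_img`; all names, losses, and weights are illustrative assumptions.

```python
# Conceptual sketch of crafting a feature at the edge so that a surrogate
# model-inversion attacker reconstructs a non-private prior, while the
# feature remains useful for the cloud ML task. All module names and
# hyperparameters below are hypothetical, not taken from the paper.
import torch
import torch.nn.functional as F


def craft_feature(x, encoder, inverter, task_head, prior_img,
                  steps=50, lr=0.01, alpha=1.0):
    with torch.no_grad():
        feat = encoder(x)                 # raw feature extracted at the edge
        target_logits = task_head(feat)   # reference output of the cloud task

    # Optimize an additive perturbation on the feature.
    delta = torch.zeros_like(feat, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        crafted = feat + delta
        recon = inverter(crafted)         # attacker's attempted reconstruction

        # Push the reconstruction toward the non-private prior image.
        privacy_loss = F.mse_loss(recon, prior_img)
        # Keep the crafted feature useful for the cloud task.
        utility_loss = F.mse_loss(task_head(crafted), target_logits)

        loss = privacy_loss + alpha * utility_loss
        opt.zero_grad()
        loss.backward()
        opt.step()

    return (feat + delta).detach()        # this crafted feature is transmitted
```

Under these assumptions, the edge device would transmit only the crafted feature, so an inversion attacker (even one that keeps retraining its inverter on observed features) is steered toward the non-private prior rather than the true facial identity.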
