Ahmed Salem (CISPA Helmholtz Center for Information Security), Michael Backes (CISPA Helmholtz Center for Information Security), Yang Zhang (CISPA Helmholtz Center for Information Security)

Machine learning (ML) has established itself as a cornerstone of various critical applications, ranging from autonomous driving to authentication systems. However, with this increasing adoption of machine learning models, multiple attacks have emerged. One such class is training time attacks, in which an adversary executes the attack before or during the training of the machine learning model. In this work, we propose a new training time attack against computer vision-based machine learning models, namely the model hijacking attack. The adversary aims to hijack a target model so that it executes a task different from its original one, without the model owner noticing. Model hijacking can cause accountability and security risks, since the owner of a hijacked model can be framed for having their model offer illegal or unethical services. Model hijacking attacks are launched in the same way as existing data poisoning attacks. However, one requirement of the model hijacking attack is stealthiness: the data samples used to hijack the target model should look similar to the model's original training dataset. To this end, we propose two different model hijacking attacks, Chameleon and Adverse Chameleon, based on a novel encoder-decoder style ML model, the Camouflager. Our evaluation shows that both of our model hijacking attacks achieve a high attack success rate, with a negligible drop in model utility.
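To make the idea concrete, the sketch below illustrates the general shape of such a camouflaging step: a toy encoder-decoder that transforms a hijacking-task image so that it visually resembles the target model's original training data while retaining features of the hijacking sample. The architecture, the two-term loss, and all tensor sizes are illustrative assumptions for this minimal PyTorch sketch, not the Camouflager as implemented in the paper.

```python
# Minimal sketch (illustrative assumptions, not the paper's implementation):
# an encoder-decoder that maps a hijacking-task image to a "camouflaged" image
# that is pushed toward the original-task data distribution.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Camouflager(nn.Module):
    """Toy encoder-decoder: hijacking image (3x32x32) -> camouflaged image (3x32x32)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def camouflage_loss(camouflaged, original_task_img, hijack_img, feat_extractor, alpha=1.0):
    """Assumed two-term objective: resemble the original-task sample (visual term)
    while keeping the hijacking sample's features (semantic term)."""
    visual = F.mse_loss(camouflaged, original_task_img)
    semantic = F.mse_loss(feat_extractor(camouflaged), feat_extractor(hijack_img))
    return visual + alpha * semantic

if __name__ == "__main__":
    cam = Camouflager()
    # Stand-in (frozen) feature extractor; any fixed image encoder could play this role.
    feat = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())
    for p in feat.parameters():
        p.requires_grad_(False)
    opt = torch.optim.Adam(cam.parameters(), lr=1e-3)
    hijack = torch.rand(4, 3, 32, 32)    # stand-in hijacking-task batch
    original = torch.rand(4, 3, 32, 32)  # stand-in original-task batch
    for _ in range(3):                   # tiny demo training loop
        out = cam(hijack)
        loss = camouflage_loss(out, original, hijack, feat)
        opt.zero_grad()
        loss.backward()
        opt.step()
    print("camouflaged batch:", out.shape)
```

The camouflaged outputs would then be labeled and injected into the target model's training set in the same way as ordinary poisoning samples; the sketch only covers the camouflaging step itself.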
