Shuo Shao (Zhejiang University), Yiming Li (Zhejiang University), Hongwei Yao (Zhejiang University), Yiling He (Zhejiang University), Zhan Qin (Zhejiang University), Kui Ren (Zhejiang University)

Ownership verification is currently the most critical and widely adopted post-hoc method to safeguard model copyright. In general, model owners exploit it to identify whether a given suspicious third-party model was stolen from them by examining whether it has particular properties 'inherited' from their released models. Currently, backdoor-based model watermarks are the primary and cutting-edge methods to implant such properties in the released models. However, backdoor-based methods have two fatal drawbacks: harmfulness and ambiguity. The former indicates that they introduce maliciously controllable misclassification behaviors (i.e., backdoors) into the watermarked released models. The latter denotes that malicious users can easily pass the verification by finding other misclassified samples, leading to ownership ambiguity.

In this paper, we argue that both limitations stem from the 'zero-bit' nature of existing watermarking schemes, which rely solely on the status of predictions (i.e., whether specific samples are misclassified) for verification. Motivated by this understanding, we design a new watermarking paradigm, Explanation as a Watermark (EaaW), that implants verification behaviors into the explanation of feature attribution instead of model predictions. Specifically, EaaW embeds a 'multi-bit' watermark into the feature-attribution explanation of specific trigger samples without changing the original predictions. We correspondingly design watermark embedding and extraction algorithms inspired by explainable artificial intelligence. In particular, our approach can be used for different tasks (e.g., image classification and text generation). Extensive experiments verify the effectiveness and harmlessness of our EaaW and its resistance to potential attacks.
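To make the verification idea concrete, the following is a minimal, illustrative sketch of the general paradigm described above, not the authors' exact EaaW algorithm: compute a feature-attribution explanation for a trigger sample, binarize it into a multi-bit string, and compare that string against the owner's watermark. The toy model, the occlusion-style attribution, and all names (toy_model, occlusion_attribution, extract_bits) are hypothetical stand-ins.

```python
# Illustrative sketch only: derive a multi-bit string from a feature-attribution
# explanation of a trigger sample and compare it with the owner's watermark.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 10))          # weights of a toy linear "classifier"

def toy_model(x: np.ndarray) -> np.ndarray:
    """Stand-in classifier: returns class logits for a flattened 8x8 input."""
    return x.reshape(-1) @ W

def occlusion_attribution(x: np.ndarray, target: int, patch: int = 2) -> np.ndarray:
    """Occlusion-style attribution: importance of each patch = logit drop when it is masked."""
    base = toy_model(x)[target]
    scores = []
    for i in range(0, 8, patch):
        for j in range(0, 8, patch):
            masked = x.copy()
            masked[i:i + patch, j:j + patch] = 0.0
            scores.append(base - toy_model(masked)[target])
    return np.array(scores)                # one attribution score per patch (16 here)

def extract_bits(attr: np.ndarray) -> np.ndarray:
    """Binarize the explanation: patches above the median attribution map to 1."""
    return (attr > np.median(attr)).astype(int)

# Ownership check on a (hypothetical) trigger sample.
trigger = rng.standard_normal((8, 8))
watermark = rng.integers(0, 2, size=16)    # owner's multi-bit watermark
predicted_class = int(np.argmax(toy_model(trigger)))
bits = extract_bits(occlusion_attribution(trigger, predicted_class))
ber = np.mean(bits != watermark)           # bit error rate; a low BER supports the claim
print(f"extracted bits: {bits}\nbit error rate: {ber:.2f}")
```

In a genuinely watermarked model, the embedding step would be trained to drive the bit error rate on the trigger samples toward zero while leaving their predictions unchanged; the untrained toy model above yields a near-chance BER and serves only to show the extraction and comparison flow.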
