Shuo Shao (Zhejiang University), Yiming Li (Zhejiang University), Hongwei Yao (Zhejiang University), Yiling He (Zhejiang University), Zhan Qin (Zhejiang University), Kui Ren (Zhejiang University)

Ownership verification is currently the most critical and widely adopted post-hoc method to safeguard model copyright. In general, model owners exploit it to identify whether a given suspicious third-party model was stolen from them by examining whether it has particular properties `inherited' from their released models. Currently, backdoor-based model watermarks are the primary and cutting-edge methods to implant such properties in the released models. However, backdoor-based methods have two fatal drawbacks: \emph{harmfulness} and \emph{ambiguity}. The former means that they introduce maliciously controllable misclassification behaviors ($i.e.$, a backdoor) into the watermarked released models. The latter means that malicious users can easily pass the verification by finding other misclassified samples, leading to ownership ambiguity.
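For context, the sketch below illustrates this zero-bit, backdoor-based verification under stated assumptions: the owner queries the suspect model on trigger samples and checks whether they are classified into a pre-chosen target label sufficiently often. The helper name, trigger set, target label, and decision threshold are illustrative assumptions, not the specifics of any particular scheme.

```python
import torch

def backdoor_verify(suspect_model, trigger_samples, target_label, threshold=0.9):
    """Zero-bit verification sketch: the watermark is deemed present if trigger
    samples are (mis)classified into the owner-chosen target label often enough.
    All parameters here are hypothetical choices for illustration."""
    suspect_model.eval()
    with torch.no_grad():
        preds = suspect_model(trigger_samples).argmax(dim=1)
    attack_success_rate = (preds == target_label).float().mean().item()
    return attack_success_rate >= threshold
```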

In this paper, we argue that both limitations stem from the `zero-bit' nature of existing watermarking schemes, which exploit only the status ($i.e.$, misclassified or not) of model predictions for verification. Motivated by this understanding, we design a new watermarking paradigm, $i.e.$, Explanation as a Watermark (EaaW), that implants verification behaviors into the explanation of feature attribution instead of model predictions. Specifically, EaaW embeds a `multi-bit' watermark into the feature attribution explanation of specific trigger samples without changing the original prediction. We correspondingly design the watermark embedding and extraction algorithms inspired by explainable artificial intelligence. In particular, our approach can be used for different tasks ($e.g.$, image classification and text generation). Extensive experiments verify the effectiveness and harmlessness of our EaaW and its resistance to potential attacks.
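A minimal sketch of the multi-bit extraction side of this idea is given below, assuming an occlusion-based feature attribution and sign thresholding as the bit decoder; these are stand-ins for illustration only and are not the paper's actual XAI-inspired embedding and extraction algorithms. Ownership would then be claimed when the bit error rate between the extracted bits and the owner's watermark is sufficiently low.

```python
import torch

def occlusion_attribution(model, x, y, patch=8):
    """Simple occlusion-based feature attribution: the importance of each
    patch is the drop in the target-class score when that patch is masked.
    Patch size and masking value are illustrative assumptions."""
    model.eval()
    with torch.no_grad():
        base = model(x.unsqueeze(0))[0, y]
        scores = []
        for i in range(0, x.shape[-2], patch):
            for j in range(0, x.shape[-1], patch):
                x_masked = x.clone()
                x_masked[..., i:i + patch, j:j + patch] = 0
                scores.append(base - model(x_masked.unsqueeze(0))[0, y])
    return torch.stack(scores)  # one importance value per patch

def extract_bits(model, trigger_x, trigger_y, n_bits):
    """Decode a multi-bit string from the explanation of a trigger sample:
    here, simply the sign of the first n_bits attribution values."""
    attribution = occlusion_attribution(model, trigger_x, trigger_y)
    return (attribution[:n_bits] > 0).long()

def bit_error_rate(extracted, watermark):
    """Fraction of mismatched bits between the extracted and embedded watermark."""
    return (extracted != watermark).float().mean().item()
```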
