Shuo Shao (Zhejiang University), Yiming Li (Zhejiang University), Hongwei Yao (Zhejiang University), Yiling He (Zhejiang University), Zhan Qin (Zhejiang University), Kui Ren (Zhejiang University)

Ownership verification is currently the most critical and widely adopted post-hoc method to safeguard model copyright. In general, model owners use it to identify whether a given suspicious third-party model was stolen from them by examining whether it exhibits particular properties 'inherited' from their released models. Currently, backdoor-based model watermarks are the primary and cutting-edge methods for implanting such properties in released models. However, backdoor-based methods have two fatal drawbacks: harmfulness and ambiguity. The former means that they introduce maliciously controllable misclassification behaviors (i.e., a backdoor) into the watermarked released models. The latter means that malicious users can easily pass verification by finding other misclassified samples, leading to ownership ambiguity.

In this paper, we argue that both limitations stem from the 'zero-bit' nature of existing watermarking schemes, which rely only on the status (i.e., misclassified or not) of predictions for verification. Motivated by this understanding, we design a new watermarking paradigm, Explanation as a Watermark (EaaW), that implants verification behaviors into the explanation produced by feature attribution instead of into model predictions. Specifically, EaaW embeds a 'multi-bit' watermark into the feature-attribution explanation of specific trigger samples without changing the original predictions. We correspondingly design watermark embedding and extraction algorithms inspired by explainable artificial intelligence. In particular, our approach can be used for different tasks (e.g., image classification and text generation). Extensive experiments verify the effectiveness and harmlessness of EaaW and its resistance to potential attacks.
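To make the 'multi-bit' idea concrete, the following minimal Python sketch illustrates the general notion of decoding a watermark from a feature-attribution explanation of a trigger sample and comparing it against the owner's secret bit string. It is our own illustration under simplifying assumptions (an occlusion-style attribution, a sign-based bit decoding rule, and a toy linear model), not the paper's actual embedding or extraction algorithm.

import numpy as np

def occlusion_attribution(predict_fn, x, baseline=0.0):
    """Attribute the prediction on x to each feature by measuring how the
    score changes when that feature is replaced by a baseline value."""
    base_score = predict_fn(x)
    attribution = np.zeros_like(x, dtype=float)
    for i in range(x.size):
        x_masked = x.copy()
        x_masked[i] = baseline
        attribution[i] = base_score - predict_fn(x_masked)
    return attribution

def extract_watermark_bits(attribution, num_bits):
    """Decode a multi-bit string from the attribution of a trigger sample:
    a positive attribution on the i-th designated feature encodes bit 1."""
    return (attribution[:num_bits] > 0).astype(int)

def verify_ownership(extracted_bits, owner_bits, threshold=0.9):
    """Claim ownership if the bit accuracy between the extracted bits and
    the owner's secret watermark exceeds a preset threshold."""
    accuracy = float(np.mean(extracted_bits == owner_bits))
    return accuracy, accuracy >= threshold

# Toy example: a linear "model" whose attribution on the trigger sample has
# (hypothetically) been aligned with the owner's bits during embedding.
rng = np.random.default_rng(0)
owner_bits = rng.integers(0, 2, size=16)           # secret multi-bit watermark
weights = np.where(owner_bits == 1, 1.0, -1.0)     # stands in for a watermarked model
trigger_sample = np.ones(16)                       # owner-chosen trigger input

predict_fn = lambda x: float(weights @ x)
attr = occlusion_attribution(predict_fn, trigger_sample)
bits = extract_watermark_bits(attr, num_bits=16)
acc, is_owner = verify_ownership(bits, owner_bits)
print(f"bit accuracy = {acc:.2f}, ownership verified: {is_owner}")

Because the watermark lives in the explanation rather than in the prediction, the model's output on the trigger sample can remain correct, which is the harmlessness property the abstract highlights.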
