Yan Pang (University of Virginia), Tianhao Wang (University of Virginia)

With the rapid advancement of diffusion-based image-generative models, generated images have become increasingly photorealistic. Moreover, with the release of high-quality pre-trained image-generative models, a growing number of users download these models and fine-tune them on downstream datasets for various image-generation tasks. However, employing such powerful pre-trained models in downstream tasks carries significant privacy-leakage risks. In this paper, we propose the first score-based membership inference attack framework tailored to recent diffusion models, operating in the more stringent black-box access setting. Covering four distinct attack scenarios and three types of attacks, the framework can target any popular conditional generative model and achieves high accuracy, with an AUC of 0.95.
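The abstract does not spell out the attack's mechanics, but the general idea behind score-based membership inference is that a model tends to fit its training samples better than unseen ones, so a per-sample score (e.g., a diffusion model's reconstruction loss) separates members from non-members. The sketch below is a hypothetical illustration of that thresholding idea on synthetic losses, not the paper's actual method; the function names and loss distributions are invented for the example.

```python
import numpy as np

def membership_scores(losses):
    # Lower loss suggests the sample was seen in training;
    # negate so that a higher score means "more likely a member".
    return -np.asarray(losses)

def auc(member_scores, nonmember_scores):
    # AUC = probability that a random member outranks a random
    # non-member (ties counted as half), computed over all pairs.
    m = np.asarray(member_scores)[:, None]
    n = np.asarray(nonmember_scores)[None, :]
    return np.mean(m > n) + 0.5 * np.mean(m == n)

# Toy data: members get slightly lower losses than non-members,
# mimicking a model that overfits its training set.
rng = np.random.default_rng(0)
member_losses = rng.normal(0.8, 0.2, 1000)
nonmember_losses = rng.normal(1.2, 0.2, 1000)

score = auc(membership_scores(member_losses),
            membership_scores(nonmember_losses))
```

An attacker would then pick a score threshold to declare membership; the AUC summarizes attack quality across all possible thresholds.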
