Yan Pang (University of Virginia), Tianhao Wang (University of Virginia)

With the rapid advancement of diffusion-based image-generative models, generated images have become increasingly photorealistic. Moreover, with the release of high-quality pre-trained image-generative models, a growing number of users download these pre-trained models and fine-tune them on downstream datasets for various image-generation tasks. However, employing such powerful pre-trained models in downstream tasks carries significant privacy-leakage risks. In this paper, we propose the first score-based membership inference attack framework tailored to recent diffusion models, operating under the more stringent black-box access setting. Covering four distinct attack scenarios and three types of attacks, the framework can target any popular conditional generative model and achieves strong attack performance, with an AUC of 0.95.
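The general shape of a score-based membership inference attack can be illustrated with a minimal sketch: the adversary obtains a per-sample score from the target model (e.g., a reconstruction loss), then thresholds it, exploiting the tendency of generative models to assign lower loss to training members. The scoring data, threshold, and helper names below are hypothetical illustrations, not the paper's actual method:

```python
import numpy as np

def threshold_mia(scores, threshold):
    """Predict membership: samples whose loss score falls below the
    threshold are classified as training-set members."""
    return scores < threshold

def attack_auc(member_scores, nonmember_scores):
    """AUC of the score as a membership signal, computed as the
    probability that a random member scores lower than a random
    non-member (ties count half)."""
    m = np.asarray(member_scores)[:, None]
    n = np.asarray(nonmember_scores)[None, :]
    return float(np.mean(m < n) + 0.5 * np.mean(m == n))

# Hypothetical scores: members tend to have lower loss than non-members.
member_scores = np.array([0.10, 0.15, 0.22, 0.30])
nonmember_scores = np.array([0.55, 0.62, 0.71, 0.90])

predictions = threshold_mia(np.concatenate([member_scores, nonmember_scores]), 0.5)
print(attack_auc(member_scores, nonmember_scores))  # 1.0 on this toy data
```

In practice, the score would come from querying the target diffusion model (e.g., its denoising loss on the candidate image), and the threshold would be calibrated on shadow data; this sketch only shows the decision rule and the AUC metric used to evaluate it.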
