Qi Xia (University of Texas at San Antonio), Qian Chen (University of Texas at San Antonio)

Traditional black-box adversarial attacks on computer vision models face significant limitations, including intensive querying requirements, time-consuming iterative processes, a lack of universality, and low attack success rates (ASR) and confidence levels (CL) due to subtle perturbations. This paper introduces AlphaDog, an Alpha channel attack and the first universally efficient targeted no-box attack. AlphaDog exploits the often-overlooked Alpha channel in RGBA images to create a visual disparity between human perception and machine interpretation, efficiently deceiving both. Specifically, AlphaDog maliciously sets the RGB channels to represent the object the AI model should recognize, while crafting the Alpha channel so that humans perceive a different image once it is blended with the standard or default background color of digital media (e.g., thumbnail or image-viewer apps). Leveraging differences in how AI models and human vision process transparency, AlphaDog outperforms existing adversarial attacks in four key ways: (i) as a no-box attack, it requires zero queries; (ii) it is highly efficient, taking only milliseconds to generate arbitrary attack images; (iii) it is universal, compromising most AI models with a single attack image; and (iv) it guarantees 100% ASR and CL. An assessment of 6,500 AlphaDog attack examples across 100 state-of-the-art image recognition systems demonstrates AlphaDog's effectiveness, and an IRB-approved experiment involving 20 college-age participants validates its stealthiness. AlphaDog can be applied to data poisoning, evasion attacks, and content moderation. Additionally, a novel pixel-intensity histogram-based detection method is introduced to identify AlphaDog, achieving 100% effectiveness in detecting AlphaDog and protecting computer vision models against it. Demos are available on the AlphaDog website (https://sites.google.com/view/alphachannelattack/home).
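
A minimal illustrative sketch of the disparity the abstract describes (not the authors' released code; the function names, NumPy/Pillow usage, and the white default background are assumptions): the raw RGB channels carry the image the model classifies, while standard source-over compositing determines what a human sees in a viewer or thumbnail.

    # Minimal sketch, assuming a viewer that composites RGBA over white.
    import numpy as np
    from PIL import Image

    def make_rgba(rgb_for_model, alpha):
        # rgb_for_model: HxWx3 uint8 array the model sees once alpha is dropped
        # alpha: HxW uint8 transparency map crafted for the human-facing image
        rgba = np.dstack([rgb_for_model, alpha]).astype(np.uint8)
        return Image.fromarray(rgba, mode="RGBA")

    def human_view(rgba_img, background=(255, 255, 255)):
        # Standard source-over blending: C_out = a*C_fg + (1 - a)*C_bg
        arr = np.asarray(rgba_img).astype(np.float32)
        rgb, a = arr[..., :3], arr[..., 3:4] / 255.0
        bg = np.array(background, dtype=np.float32)
        return Image.fromarray((rgb * a + bg * (1.0 - a)).astype(np.uint8), "RGB")

    # A pipeline that calls img.convert("RGB") classifies `rgb_for_model`,
    # while a thumbnail or image viewer displays `human_view(img)`.

The paper's exact pixel-intensity histogram criterion is not reproduced here; the following hypothetical sketch only illustrates the general idea, assuming a KL divergence between the histograms of the raw RGB data and the alpha-blended view as the test statistic and a placeholder threshold.

    # Hypothetical detection sketch: a large histogram divergence between the
    # raw RGB data and the blended view suggests the alpha channel hides a
    # different image. Threshold and statistic are assumptions, not the paper's.
    import numpy as np
    from PIL import Image

    def intensity_hist(gray, bins=256):
        hist, _ = np.histogram(gray, bins=bins, range=(0, 255), density=True)
        return hist + 1e-12  # avoid division by zero below

    def looks_like_alphadog(path, threshold=0.5):  # threshold is a placeholder
        img = Image.open(path)
        if img.mode != "RGBA":
            return False  # no alpha channel, nothing to hide
        raw = np.asarray(img.convert("RGB").convert("L"), dtype=np.float32)
        white = Image.new("RGBA", img.size, (255, 255, 255, 255))
        blended = np.asarray(Image.alpha_composite(white, img).convert("L"),
                             dtype=np.float32)
        p, q = intensity_hist(raw), intensity_hist(blended)
        kl = float(np.sum(p * np.log(p / q)))  # divergence of the two histograms
        return kl > threshold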

