Ashish Hooda (University of Wisconsin-Madison), Andrey Labunets (UC San Diego), Tadayoshi Kohno (University of Washington), Earlence Fernandes (UC San Diego)

Content scanning systems employ perceptual hashing algorithms to scan user content for illegal material, such as child pornography or terrorist recruitment flyers. Perceptual hashing algorithms help determine whether two images are visually similar while preserving the privacy of the input images. Several efforts from industry and academia propose scanning on client devices such as smartphones due to the impending rollout of end-to-end encryption that will make server-side scanning difficult. These proposals have met with strong criticism because of the potential for the technology to be misused for censorship. However, the risks of this technology in the context of surveillance are not well understood. Our work informs this conversation by experimentally characterizing the potential for one type of misuse: attackers manipulating the content scanning system to perform physical surveillance on target locations. Our contributions are threefold: (1) we offer a definition of physical surveillance in the context of client-side image scanning systems; (2) we experimentally characterize this risk and create a surveillance algorithm that achieves physical surveillance rates of more than 30% by poisoning 0.2% of the perceptual hash database; (3) we experimentally study the trade-off between the robustness of client-side image scanning systems and surveillance, showing that more robust detection of illegal material leads to an increased potential for physical surveillance in most settings.
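To make the attack surface concrete, the sketch below models the matching logic of a perceptual-hash-based client-side scanner and the database-poisoning misuse the abstract describes. All function names, hash values, and the Hamming-distance threshold are illustrative assumptions for a toy model; they are not details of any deployed system or of the paper's exact algorithm.

```python
# Toy model of perceptual-hash-based client-side scanning and the
# database-poisoning surveillance misuse described in the abstract.
# All names, values, and thresholds here are hypothetical.

def hamming_distance(h1: int, h2: int) -> int:
    """Number of differing bits between two fixed-length hashes."""
    return bin(h1 ^ h2).count("1")

def scan(image_hash: int, hash_db: set, threshold: int = 4) -> bool:
    """Client-side check: does the image's perceptual hash fall within
    `threshold` bits of any database entry? A hit triggers a report."""
    return any(hamming_distance(image_hash, h) <= threshold for h in hash_db)

# Legitimate use: the database holds hashes of known illegal material.
hash_db = {0b1011_0010_1101_0001, 0b0111_1100_0010_1010}

# Surveillance misuse: an attacker inserts hashes of ordinary photos of a
# target location into the database. Any user who photographs that location
# now triggers a report, revealing their physical presence there.
poisoned_entries = {0b1100_1010_0011_0110}  # hash of a target-location photo
hash_db |= poisoned_entries

user_photo_hash = 0b1100_1010_0011_0111  # 1 bit away: a similar photo
print(scan(user_photo_hash, hash_db))    # True -> presence at location leaked
```

In this toy model, raising the matching threshold makes detection more robust to small perturbations of illegal material, but it equally widens the match region around each poisoned entry, which gives one intuition for the robustness/surveillance trade-off the paper characterizes.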
