Shujiang Wu (Johns Hopkins University), Pengfei Sun (F5, Inc.), Yao Zhao (F5, Inc.), Yinzhi Cao (Johns Hopkins University)

Browser fingerprints, while traditionally used for web tracking, have recently been adopted more and more often to defend against or detect various attacks targeting real-world websites. Faced with such defenses, adversaries have upgraded their weapons to generate their own fingerprints---defined as adversarial fingerprints---to bypass existing defense or detection. Naturally, such adversarial fingerprints differ from the benign ones of user browsers because they are generated intentionally to bypass defenses. However, no prior work has studied such differences in the wild by comparing adversarial with benign fingerprints, let alone how adversarial fingerprints are generated.

In this paper, we present the first billion-scale measurement study of browser fingerprints collected from 14 major commercial websites (all ranked among the Alexa/Tranco top 10,000). We further classify these fingerprints as either adversarial or benign using a learning-based, feedback-driven fraud and bot detection system from a major security company, and then study their differences. Our results yield three major observations: (i) adversarial fingerprints differ significantly from benign ones in many metrics, e.g., entropy, unique rate, and evolution speed, (ii) adversaries adopt various tools and strategies to generate adversarial fingerprints, and (iii) adversarial fingerprints vary across different attack types, e.g., from content scraping to fraudulent transactions.
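Two of the metrics named above, entropy and unique rate, have standard definitions over a sample of observed fingerprint values. The sketch below illustrates how they could be computed; the sample hashes and the exact metric definitions are assumptions for illustration, not the paper's implementation.

```python
from collections import Counter
import math

def shannon_entropy(values):
    """Shannon entropy (in bits) of the distribution of fingerprint values."""
    counts = Counter(values)
    total = len(values)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def unique_rate(values):
    """Fraction of observations whose fingerprint value appears exactly once."""
    counts = Counter(values)
    return sum(1 for c in counts.values() if c == 1) / len(values)

# Hypothetical sample of fingerprint hashes (e.g., canvas-rendering digests)
fps = ["a1", "a1", "b2", "c3", "c3", "c3", "d4"]
print(f"entropy: {shannon_entropy(fps):.3f} bits")   # higher = more diverse
print(f"unique rate: {unique_rate(fps):.3f}")        # fraction seen only once
```

Intuitively, a population of adversarial fingerprints produced by a small set of tools would show lower entropy than organic browsers, while randomized spoofing would inflate the unique rate.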
