Yaniv David (Columbia University), Neophytos Christou (Brown University), Andreas D. Kellas (Columbia University), Vasileios P. Kemerlis (Brown University), Junfeng Yang (Columbia University)

Managed languages provide convenient ways to serialize objects, allowing applications to persist and transfer them easily, yet this feature also opens them up to attacks. By manipulating serialized objects, attackers can trigger a chained execution of existing code segments, using them as gadgets to form an exploit. Protecting deserialization calls against such attacks is cumbersome and tedious, so many developers do not deploy defenses properly. We present QUACK, a framework that automatically protects applications by fixing calls to deserialization APIs. This “binding” restricts the classes allowed in the deserialization process, severely limiting the code available for (ab)use as gadgets in exploits. QUACK computes the set of classes that should be allowed using a novel static duck-typing inference technique: it statically collects all statements in the program that manipulate objects after they are deserialized, and from them assembles a filter specifying the classes that should be available at runtime. We have implemented QUACK for PHP and evaluated it on a set of applications with known CVEs, as well as popular applications crawled from GitHub. QUACK fixed the applications in a way that thwarted every attempt to automatically generate an exploit against them, blocking, on average, 97% of the application code that could otherwise be used as gadgets. We submitted a sample of three fixes generated by QUACK as pull requests, and the applications’ developers merged them.
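The abstract describes binding deserialization call sites to an allow-list of classes. In PHP, the standard mechanism for such a binding is the `allowed_classes` option of `unserialize()`; the sketch below (with hypothetical class names, not taken from the paper) illustrates roughly what a fix of this kind could look like at a single call site.

```php
<?php
// Minimal sketch of a call-site fix of the kind described in the abstract.
// ShoppingCart and CartItem are hypothetical stand-ins for classes that the
// surrounding application code actually manipulates after deserialization.

class CartItem {
    public function __construct(public string $sku, public int $qty) {}
}

class ShoppingCart {
    /** @var CartItem[] */
    public array $items = [];
}

$payload = serialize(new ShoppingCart());

// Unprotected call: any class loadable by the application can be
// instantiated, enabling property-oriented programming (POP) gadget chains.
$unsafe = unserialize($payload);

// Hardened call: unserialize() is bound to an explicit allow-list, so a
// manipulated payload cannot instantiate arbitrary gadget classes; anything
// outside the list is deserialized as __PHP_Incomplete_Class instead.
$safe = unserialize($payload, [
    'allowed_classes' => [ShoppingCart::class, CartItem::class],
]);
```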
