Cornelius Aschermann (Ruhr-Universität Bochum), Tommaso Frassetto (Technische Universität Darmstadt), Thorsten Holz (Ruhr-Universität Bochum), Patrick Jauernig (Technische Universität Darmstadt), Ahmad-Reza Sadeghi (Technische Universität Darmstadt), Daniel Teuchert (Ruhr-Universität Bochum)

Fuzzing is a well-known method for efficiently identifying bugs in programs.
Unfortunately, when fuzzing targets that require highly structured inputs, such as interpreters, many fuzzing methods struggle to get past the syntax checks.
More specifically, interpreters often process inputs in multiple stages: first syntactic, then semantic correctness is checked. Only if these checks pass is the interpreted code executed.
This prevents fuzzers from exercising "deeper" (and hence potentially more interesting) code.
Typically, transforming one valid input into another valid input that exercises a different feature of the target application requires too many coordinated mutations for simple mutation-based fuzzers to discover: small changes such as bit flips usually only lead to the execution of error paths in the parsing engine.
So-called grammar fuzzers are able to pass the syntax checks by generating inputs from context-free grammars.
Using coverage feedback can significantly increase the efficiency of a fuzzing engine; hence, it is commonly used in state-of-the-art mutational fuzzers that do not rely on grammars.
Yet, grammar fuzzers do not make use of code coverage, i.e., they do not know whether an input triggers new functionality or not.
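
To illustrate why grammar-based generation sidesteps the parsing problem, consider the following minimal Python sketch (illustrative only, not NAUTILUS code; the toy grammar and function names are made up). It derives strings from a small context-free grammar for arithmetic expressions, so every generated input is syntactically valid by construction:

    import random

    # Toy context-free grammar for arithmetic expressions (hypothetical example).
    GRAMMAR = {
        "EXPR": [["TERM"], ["EXPR", "+", "TERM"], ["EXPR", "*", "TERM"]],
        "TERM": [["NUM"], ["(", "EXPR", ")"]],
        "NUM":  [["1"], ["2"], ["42"]],
    }

    def generate(symbol="EXPR", depth=0, max_depth=8):
        # Terminal symbols are emitted verbatim.
        if symbol not in GRAMMAR:
            return symbol
        rules = GRAMMAR[symbol]
        # Bound the recursion by falling back to the shortest production.
        if depth >= max_depth:
            rules = [min(rules, key=len)]
        rule = random.choice(rules)
        return "".join(generate(s, depth + 1, max_depth) for s in rule)

    print(generate())  # e.g. "(1+42)*2"

Because every derivation follows the grammar, such inputs pass the syntax stage and reach deeper interpreter code; what is still missing is any notion of which generated inputs are worth keeping.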

In this paper, we propose NAUTILUS, a method to efficiently fuzz programs that require highly structured inputs by combining grammar-based input generation with code coverage feedback.
This allows us to recombine aspects of interesting inputs that were learned individually, and to dramatically increase the probability that any generated input will be accepted by the parser.
We implemented a proof-of-concept fuzzer that we tested on multiple targets, including ChakraCore (the JavaScript engine of Microsoft Edge), PHP, mruby, and Lua.
NAUTILUS identified multiple bugs in all of these targets: seven in mruby, three in PHP, two in ChakraCore, and one in Lua.
Reporting these bugs was rewarded with bug bounties totaling 2,600 USD, and six CVEs were assigned.
Our experiments show that combining context-free grammars with feedback-driven fuzzing significantly outperforms state-of-the-art approaches: measured by code coverage, it beats American Fuzzy Lop (AFL) by an order of magnitude and grammar fuzzers by more than a factor of two.
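
A hypothetical outline of such a feedback loop is sketched below (all function names are placeholders, not the actual NAUTILUS implementation): inputs are derived from the grammar, executed on an instrumented target, and kept only if they cover new code, so that subtrees of interesting inputs can later be recombined.

    import random

    def coverage_of(tree):
        # Placeholder: run the instrumented target on the rendered input and
        # return the set of covered edges (e.g., from AFL-style instrumentation).
        raise NotImplementedError

    def grammar_generate():
        # Placeholder: derive a fresh, syntactically valid derivation tree
        # from the grammar (as in the earlier sketch).
        raise NotImplementedError

    def splice(tree_a, tree_b):
        # Placeholder: replace a random subtree of tree_a with a subtree of
        # tree_b rooted at the same nonterminal, recombining learned fragments.
        raise NotImplementedError

    def fuzz_loop(iterations=100_000):
        corpus, global_coverage = [], set()
        for _ in range(iterations):
            # Either generate a fresh input or recombine two interesting ones.
            if len(corpus) >= 2 and random.random() < 0.5:
                tree = splice(*random.sample(corpus, 2))
            else:
                tree = grammar_generate()
            new_edges = coverage_of(tree) - global_coverage
            if new_edges:  # coverage feedback: keep only inputs that reach new code
                corpus.append(tree)
                global_coverage |= new_edges
        return corpus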
