Cornelius Aschermann (Ruhr-Universität Bochum), Tommaso Frassetto (Technische Universität Darmstadt), Thorsten Holz (Ruhr-Universität Bochum), Patrick Jauernig (Technische Universität Darmstadt), Ahmad-Reza Sadeghi (Technische Universität Darmstadt), Daniel Teuchert (Ruhr-Universität Bochum)

Fuzzing is a well-known method for efficiently identifying bugs in programs.
Unfortunately, many fuzzing methods struggle to pass the syntax checks of targets that require highly structured inputs, such as interpreters.
More specifically, interpreters often process inputs in multiple stages: first syntactic, then semantic correctness is checked. Only if both checks pass is the interpreted code actually executed.
This prevents fuzzers from reaching "deeper", and hence potentially more interesting, code.
Typically, turning one valid input into another valid input that exercises a different feature of the target application requires too many mutations for simple mutation-based fuzzers to discover: small changes such as bit flips usually only trigger error paths in the parsing engine.
So-called grammar fuzzers are able to pass these syntax checks by generating inputs from context-free grammars.
Coverage feedback can significantly increase the efficiency of fuzzing engines and is therefore commonly used in state-of-the-art mutational fuzzers that do not use grammars.
Grammar fuzzers, in contrast, do not make use of code coverage: they do not know whether an input triggers new functionality.
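To illustrate how a grammar fuzzer sidesteps the parser, the following minimal Python sketch derives random, syntactically valid strings from a context-free grammar, so every generated input passes the syntax check by construction. It is our own toy example; the grammar and helper names are hypothetical and do not come from NAUTILUS or any of the fuzzers mentioned here.

```python
import random

# Toy context-free grammar for a tiny expression language (purely illustrative,
# not the grammar of any real interpreter or of NAUTILUS itself).
GRAMMAR = {
    "EXPR": [["NUM"], ["EXPR", "+", "EXPR"], ["(", "EXPR", ")"]],
    "NUM":  [["1"], ["2"], ["42"]],
}

def generate(symbol="EXPR", depth=0, max_depth=6):
    """Expand a non-terminal into a random but syntactically valid string."""
    if symbol not in GRAMMAR:                 # terminal symbol: emit as-is
        return symbol
    rules = GRAMMAR[symbol]
    # Near the depth limit, fall back to the first (non-recursive) rule to terminate.
    rule = rules[0] if depth >= max_depth else random.choice(rules)
    return "".join(generate(s, depth + 1, max_depth) for s in rule)

if __name__ == "__main__":
    for _ in range(5):
        # Every output parses, unlike inputs produced by random bit flips.
        print(generate())
```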

In this paper, we propose NAUTILUS, a method to efficiently fuzz programs that require highly-structured inputs by combining the use of grammars with the use of code coverage feedback.
This allows us to recombine aspects of interesting inputs that were learned individually, and to dramatically increase the probability that any generated input will be accepted by the parser.
We implemented a proof-of-concept fuzzer that we tested on multiple targets, including ChakraCore (the JavaScript engine of Microsoft Edge), PHP, mruby, and Lua.
NAUTILUS identified multiple bugs in all of the targets: seven in mruby, three in PHP, two in ChakraCore, and one in Lua.
Reporting these bugs earned a total of 2,600 USD in bug bounties, and six CVEs were assigned.
Our experiments show that combining context-free grammars with feedback-driven fuzzing significantly increases code coverage: NAUTILUS outperforms state-of-the-art approaches such as American Fuzzy Lop (AFL) by an order of magnitude and plain grammar fuzzers by more than a factor of two.
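The Python sketch below outlines, under our own assumptions, how such a combination of grammars and coverage feedback could look: derivation trees produced from the grammar are mutated, for example by replacing or splicing in subtrees from other interesting inputs, and only candidates that yield new coverage are kept for further recombination. The helpers generate, mutate_tree, unparse, and run_with_coverage are hypothetical placeholders, not NAUTILUS's actual API.

```python
import random

def coverage_guided_grammar_fuzz(grammar, target, iterations=100_000):
    """Minimal sketch of a coverage-guided grammar fuzzer (assumptions, not NAUTILUS code).

    Hypothetical helpers:
      generate(grammar)                  -> a random derivation tree
      mutate_tree(grammar, tree, other)  -> a mutated tree, e.g. via subtree
                                            replacement or splicing from `other`
      unparse(tree)                      -> the concrete input string
      run_with_coverage(target, data)    -> the set of code edges covered
    """
    queue = [generate(grammar)]          # corpus of interesting derivation trees
    global_coverage = set()

    for _ in range(iterations):
        # Pick a previously interesting tree and recombine it with another one.
        tree = random.choice(queue)
        candidate = mutate_tree(grammar, tree, random.choice(queue))

        new_edges = run_with_coverage(target, unparse(candidate))
        if new_edges - global_coverage:  # input triggered unseen functionality
            global_coverage |= new_edges
            queue.append(candidate)      # keep it for further recombination
    return queue
```

Because mutations operate on derivation trees rather than raw bytes, every candidate remains syntactically valid, while the coverage check steers the search toward inputs that exercise new code.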
