Andrea Fioraldi (EURECOM), Alessandro Mantovani (EURECOM), Dominik Maier (TU Berlin), Davide Balzarotti (EURECOM)

AFL is one of the most used and extended fuzzing projects, adopted by industry and academic researchers alike. While the community agrees on AFL’s effectiveness at discovering new vulnerabilities and on its outstanding usability, many of its internal design choices remain untested to date. Security practitioners often clone the project “as-is” and use it as a starting point to develop new techniques, usually taking everything under the hood for granted. We believe, instead, that a careful analysis of its different parameters could help modern fuzzers improve their performance and explain how each choice can affect the outcome of security testing, either positively or negatively.

The goal of this paper is to provide a comprehensive understanding of the internal mechanisms of AFL by performing experiments and comparing different metrics used to evaluate fuzzers. This will demonstrate the efficacy of some design choices and clarify which aspects are instead outdated. To achieve this, we set up nine unique experiments that we carried out on the popular FuzzBench platform. Each test focuses on a different aspect of AFL, ranging from its mutation approach to its feedback encoding scheme and scheduling methodology.
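To make the feedback encoding under test concrete, below is a minimal C sketch of AFL's classic edge-coverage scheme as described in its technical whitepaper: each instrumented basic block updates a 64 KB shared map indexed by a hash of the current and previous block IDs, and raw hit counts are later coarsened into power-of-two-style buckets. The function names and standalone setup here are illustrative, not AFL's actual instrumentation code.

```c
#include <stdint.h>

#define MAP_SIZE (1 << 16)            /* AFL's default 64 KB coverage map */

static uint8_t  shared_mem[MAP_SIZE]; /* shared with the fuzzer process in real AFL */
static uint16_t prev_location;        /* shifted ID of the previously executed block */

/* Called at every basic block; cur_location is a random compile-time
   constant injected by the instrumentation. The XOR identifies the edge
   (previous block -> current block); the shift keeps A->B distinct from
   B->A and stops tight self-loops from always hitting index 0. */
static inline void trace_edge(uint16_t cur_location) {
    shared_mem[cur_location ^ prev_location]++;
    prev_location = cur_location >> 1;
}

/* After each run, AFL coarsens raw hit counts into buckets so that,
   e.g., a loop executing 5 vs. 6 times does not count as new behavior,
   while 3 vs. 4 times does. Mirrors AFL's count_class_lookup table. */
static uint8_t bucket(uint8_t hits) {
    if (hits == 0)   return 0;
    if (hits == 1)   return 1;
    if (hits == 2)   return 2;
    if (hits == 3)   return 4;
    if (hits <= 7)   return 8;
    if (hits <= 15)  return 16;
    if (hits <= 31)  return 32;
    if (hits <= 127) return 64;
    return 128;
}
```

This bucketing is one of the encoding parameters whose impact the experiments measure: it trades sensitivity to fine-grained loop counts for a smaller, more stable notion of "new coverage".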

Our preliminary findings show that each design choice affects different factors of AFL. While some of these are positively correlated with the number of detected bugs or with target coverage, other features relate to usability and reliability. Most importantly, the outcome of our experiments indicates which parts of AFL should be preserved in modern fuzzers.
