Bernard Nongpoh (Université Paris Saclay), Marwan Nour (Université Paris Saclay), Michaël Marcozzi (Université Paris Saclay), Sébastien Bardin (Université Paris Saclay)

Fuzzing is an effective software testing method that discovers bugs by feeding target applications with (usually a massive amount of) automatically generated inputs. Many state-of-the-art fuzzers use branch coverage as a feedback metric to guide the fuzzing process. The fuzzer retains inputs for further mutation only if branch coverage is increased. However, branch coverage only provides a shallow sampling of program behaviours and hence may discard inputs that might be interesting to mutate. This work aims to take advantage of the large body of research on defining finer-grained code coverage metrics (such as mutation coverage) and to use these metrics as better proxies for selecting interesting inputs to mutate. We propose to make coverage-based fuzzers support most fine-grained coverage metrics out of the box (i.e., without changing fuzzer internals). We achieve this by making the test objectives defined by these metrics (such as mutants to kill) explicit as new branches in the target program. Fuzzing such a modified target is then equivalent to fuzzing the original target, but the fuzzer will also retain for mutation those inputs that cover the additional metric objectives. We propose a preliminary evaluation of this novel idea using two state-of-the-art fuzzers, namely AFL++ (3.14c) and QSYM with AFL (2.52b), on the four standard LAVA-M benchmarks. Significantly positive results are obtained on one benchmark and marginally negative ones on the three others. We discuss directions towards a strong and complete evaluation of the proposed approach and call for early feedback from the fuzzing community.
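To illustrate the core idea of making metric objectives explicit as branches, the minimal C sketch below encodes a single weak-mutation objective as an extra branch that any branch-coverage-guided fuzzer will track. The function abs_diff and the specific mutant are hypothetical examples chosen for illustration, not taken from the paper's implementation.

```c
#include <stdio.h>

/* Original code under test. */
int abs_diff(int a, int b) {
    return (a > b) ? (a - b) : (b - a);
}

/* Instrumented copy: the objective "kill the mutant that replaces
 * '>' with '>='" is exposed as an explicit new branch. A fuzzer
 * guided by branch coverage (e.g. AFL++) keeps any input that
 * reaches the then-branch, i.e. any input on which the mutant
 * would diverge from the original (here, a == b), without any
 * change to the fuzzer internals. */
int abs_diff_instrumented(int a, int b) {
    if ((a > b) != (a >= b)) {
        /* Covering this branch means the mutant is (weakly) killed. */
    }
    return (a > b) ? (a - b) : (b - a);
}

int main(void) {
    /* An input with a != b never triggers the new branch ... */
    printf("%d\n", abs_diff_instrumented(7, 3));
    /* ... but a == b does, so a coverage-guided fuzzer retains it. */
    printf("%d\n", abs_diff_instrumented(5, 5));
    return 0;
}
```

Because the added branch has no effect on the program's observable behaviour, fuzzing the instrumented target remains equivalent to fuzzing the original one, while the coverage feedback now also rewards inputs that satisfy the mutation-coverage objective.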
