Raymond Muller (Purdue University), Yanmao Man (University of Arizona), Z. Berkay Celik (Purdue University), Ming Li (University of Arizona) and Ryan Gerdes (Virginia Tech)

With emerging vision-based autonomous driving (AD) systems, it becomes increasingly important to have datasets for evaluating their correct operation and identifying potential security flaws. However, collecting large amounts of data requires either human experts to manually label potentially hundreds of thousands of image frames, or machine learning algorithms to label the data automatically, in the hope that the resulting accuracy is good enough for the application. Labeling becomes especially problematic for context information, such as the location and velocity of surrounding objects, which is useful for evaluating the correctness of AD systems and improving their stability and robustness.
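The excerpt does not specify the authors' tooling, but as a point of reference, such context labels can be read directly from a simulator's ground truth rather than annotated by hand. Below is a minimal sketch assuming a CARLA-style simulator running locally; the function name `extract_context_labels` and the connection parameters are illustrative, not the authors' implementation.

```python
# Sketch: pulling ground-truth context labels (pose and velocity of surrounding
# vehicles) from a CARLA simulator instead of hand-labeling image frames.
# Assumes a CARLA server is reachable on localhost:2000 (an assumption, not
# taken from the paper).
import carla


def extract_context_labels(world: carla.World) -> list[dict]:
    """Return per-vehicle ground-truth location and velocity for one frame."""
    labels = []
    for actor in world.get_actors().filter("vehicle.*"):
        transform = actor.get_transform()
        velocity = actor.get_velocity()  # m/s in world coordinates
        labels.append({
            "actor_id": actor.id,
            "type": actor.type_id,
            "location": (transform.location.x,
                         transform.location.y,
                         transform.location.z),
            "velocity": (velocity.x, velocity.y, velocity.z),
        })
    return labels


if __name__ == "__main__":
    client = carla.Client("localhost", 2000)
    client.set_timeout(5.0)
    frame_labels = extract_context_labels(client.get_world())
    print(f"labeled {len(frame_labels)} surrounding vehicles")
```

Because the labels come from the simulator state rather than a learned model, their accuracy does not depend on a separate labeling algorithm being "good enough" for the application.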
