Raymond Muller (Purdue University), Yanmao Man (University of Arizona), Z. Berkay Celik (Purdue University), Ming Li (University of Arizona) and Ryan Gerdes (Virginia Tech)

With emerging vision-based autonomous driving (AD) systems, it becomes increasingly important to have datasets for evaluating their correct operation and identifying potential security flaws. However, when collecting large amounts of data, either human experts manually label potentially hundreds of thousands of image frames, or the data is labeled automatically with machine learning algorithms, in the hope that the resulting accuracy is good enough for the application. This becomes especially problematic when tracking context information, such as the location and velocity of surrounding objects, which is useful for evaluating the correctness and improving the stability and robustness of AD systems.
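To make the kind of context information described above concrete, the sketch below shows what a single labeled camera frame might look like in such a dataset. The field names and structure are illustrative assumptions for this example only, not the format used by this work or by any particular AD dataset.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ObjectAnnotation:
    """Hypothetical per-object label attached to one image frame."""
    object_id: int                              # stable ID for tracking the object across frames
    category: str                               # e.g., "vehicle" or "pedestrian"
    bbox: Tuple[float, float, float, float]     # (x_min, y_min, x_max, y_max) in pixels
    location_m: Tuple[float, float, float]      # (x, y, z) in meters, ego-vehicle frame
    velocity_mps: Tuple[float, float, float]    # (vx, vy, vz) in meters per second

@dataclass
class FrameAnnotation:
    """Hypothetical record for one camera frame plus its context information."""
    frame_index: int
    timestamp_s: float
    image_path: str
    objects: List[ObjectAnnotation] = field(default_factory=list)

# Example: one frame containing a single tracked vehicle ahead of the ego vehicle.
frame = FrameAnnotation(
    frame_index=1042,
    timestamp_s=52.10,
    image_path="frames/001042.png",
    objects=[
        ObjectAnnotation(
            object_id=7,
            category="vehicle",
            bbox=(612.0, 340.0, 708.0, 402.0),
            location_m=(18.4, -1.2, 0.0),
            velocity_mps=(7.9, 0.1, 0.0),
        )
    ],
)

Producing records like this by hand for hundreds of thousands of frames is what makes manual labeling costly, and producing them automatically is where labeling accuracy becomes a concern.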
