Raymond Muller (Purdue University), Yanmao Man (University of Arizona), Z. Berkay Celik (Purdue University), Ming Li (University of Arizona) and Ryan Gerdes (Virginia Tech)

With emerging vision-based autonomous driving (AD) systems, it becomes increasingly important to have datasets that evaluate their correct operation and expose potential security flaws. However, collecting a large amount of data requires either human experts to manually label potentially hundreds of thousands of image frames, or machine learning algorithms to label the data automatically, in the hope that the resulting accuracy is good enough for the application. This becomes especially problematic when tracking context information, such as the location and velocity of surrounding objects, which is useful for evaluating the correctness and improving the stability and robustness of AD systems.
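As a minimal sketch of the kind of per-frame context information described above, the following Python snippet models each tracked object with a location and a velocity, and derives velocities by finite differences between consecutive frames. The names (ObjectContext, FrameAnnotation, estimate_velocity) and the annotation schema are hypothetical illustrations, not the authors' implementation or dataset format.

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class ObjectContext:
    """Hypothetical per-object context: location and velocity of a surrounding object."""
    track_id: int
    position_m: Tuple[float, float]    # (x, y) in metres, relative to the ego vehicle
    velocity_mps: Tuple[float, float]  # (vx, vy) in metres per second


@dataclass
class FrameAnnotation:
    """Hypothetical per-frame label: timestamp plus the objects visible in the frame."""
    frame_index: int
    timestamp_s: float
    objects: List[ObjectContext]


def estimate_velocity(prev: FrameAnnotation, curr: FrameAnnotation) -> None:
    """Fill in velocities by finite differences between two consecutive frames."""
    dt = curr.timestamp_s - prev.timestamp_s
    prev_by_id = {o.track_id: o for o in prev.objects}
    for obj in curr.objects:
        if obj.track_id in prev_by_id and dt > 0:
            px, py = prev_by_id[obj.track_id].position_m
            cx, cy = obj.position_m
            obj.velocity_mps = ((cx - px) / dt, (cy - py) / dt)
```

A labeling pipeline, whether human-driven or automated, could emit records like these so that downstream tools can check an AD system's behavior against the location and velocity of nearby objects.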

View More Papers

Demo #7: Automated Tracking System For LiDAR Spoofing Attacks...

Yulong Cao, Jiaxiang Ma, Kevin Fu (University of Michigan), Sara Rampazzi (University of Florida), and Z. Morley Mao (University of Michigan)

Best Demo Award Runner-up ($200 cash prize)!


30 Years into Scientific Binary Decompilation: What We Have...

Dr. Ruoyu (Fish) Wang, Assistant Professor at Arizona State University


Demo #3: Detecting Illicit Drone Video Filming Using Cryptanalysis

Ben Nassi, Raz Ben-Netanel (Ben-Gurion University of the Negev), Adi Shamir (Weizmann Institute of Science), and Yuval Elovici (Ben-Gurion University of the Negev)


D-Box: DMA-enabled Compartmentalization for Embedded Applications

Alejandro Mera (Northeastern University), Yi Hui Chen (Northeastern University), Ruimin Sun (Northeastern University), Engin Kirda (Northeastern University), Long Lu (Northeastern University)
