Raymond Muller (Purdue University), Yanmao Man (University of Arizona), Z. Berkay Celik (Purdue University), Ming Li (University of Arizona) and Ryan Gerdes (Virginia Tech)

With the rise of vision-based autonomous driving (AD) systems, datasets for evaluating their correct operation and identifying potential security flaws have become increasingly important. However, collecting data at scale requires either human experts manually labeling potentially hundreds of thousands of image frames or machine learning algorithms labeling the data automatically, in the hope that the resulting accuracy is good enough for the application. This becomes especially problematic when tracking context information, such as the location and velocity of surrounding objects, which is useful for evaluating correctness and improving the stability and robustness of AD systems.
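To make "context information" concrete, the sketch below shows one possible record format for per-frame labels that track surrounding objects, with velocity derived from positions in consecutive frames. This is an illustrative assumption, not the authors' actual schema; all field and function names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Hypothetical schema (not from the paper): per-object context labels
# attached to each frame of an AD dataset.

@dataclass
class ObjectContext:
    track_id: int                      # persistent ID across frames
    category: str                      # e.g. "car", "pedestrian"
    bbox: Tuple[int, int, int, int]    # (x, y, width, height) in pixels
    position_m: Tuple[float, float]    # location relative to ego vehicle, meters
    velocity_mps: Tuple[float, float]  # velocity vector, meters/second

@dataclass
class LabeledFrame:
    frame_index: int
    timestamp_s: float
    objects: List[ObjectContext] = field(default_factory=list)

def estimate_velocity(prev: ObjectContext, curr: ObjectContext,
                      dt: float) -> Tuple[float, float]:
    """Finite-difference velocity estimate from two consecutive positions."""
    return ((curr.position_m[0] - prev.position_m[0]) / dt,
            (curr.position_m[1] - prev.position_m[1]) / dt)
```

Labels like these are exactly where automated labeling errors compound: a small positional error in one frame produces a large error in the derived velocity.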
