Raymond Muller (Purdue University), Yanmao Man (University of Arizona), Z. Berkay Celik (Purdue University), Ming Li (University of Arizona) and Ryan Gerdes (Virginia Tech)

With emerging vision-based autonomous driving (AD) systems, it becomes increasingly important to have datasets to evaluate their correct operation and identify potential security flaws. However, collecting data at this scale requires either human experts to manually label potentially hundreds of thousands of image frames, or machine learning algorithms to label the data automatically in the hope that the accuracy is good enough for the application. This becomes especially problematic when tracking context information, such as the location and velocity of surrounding objects, which is useful for evaluating the correctness and improving the stability and robustness of AD systems.
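As a rough illustration of the kind of context information the abstract refers to, the sketch below derives a per-object velocity estimate from two consecutive position labels of the same tracked object. The annotation format, field names, and the estimate_velocity helper are hypothetical assumptions for illustration only, not part of the paper or any particular dataset.

    # Minimal sketch (assumed annotation format): velocity of a surrounding
    # object estimated by finite differences over consecutive frame labels.
    from dataclasses import dataclass

    @dataclass
    class ObjectLabel:
        track_id: int    # identity of the surrounding object across frames
        x: float         # position in the ego vehicle's frame, meters
        y: float
        timestamp: float # capture time of the frame, seconds

    def estimate_velocity(prev: ObjectLabel, curr: ObjectLabel) -> tuple[float, float]:
        """Finite-difference velocity (m/s) between two labels of one track."""
        assert prev.track_id == curr.track_id, "labels must belong to the same object"
        dt = curr.timestamp - prev.timestamp
        if dt <= 0:
            raise ValueError("timestamps must be strictly increasing")
        return (curr.x - prev.x) / dt, (curr.y - prev.y) / dt

    # Example: the same object labeled in two frames captured 0.1 s apart.
    a = ObjectLabel(track_id=7, x=12.0, y=3.0, timestamp=0.0)
    b = ObjectLabel(track_id=7, x=12.8, y=3.1, timestamp=0.1)
    print(estimate_velocity(a, b))  # approximately (8.0, 1.0) m/s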
