Diego Ortiz, Leilani Gilpin, Alvaro A. Cardenas (University of California, Santa Cruz)

Autonomous vehicles must operate in a complex environment governed by social norms and expectations. While most work on securing autonomous vehicles has focused on safety, we argue that we also need to monitor for deviations from societal “common sense” rules to identify attacks against autonomous systems. In this paper, we provide a first approach to encoding and understanding these common-sense driving behaviors by semi-automatically extracting rules from driving manuals. We encode our driving rules in a formal specification and make our rules available online for other researchers.
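To illustrate the idea, here is a minimal sketch of how a common-sense rule extracted from a driving manual could be encoded as a runtime monitor predicate. All names here (`VehicleState`, the rule functions, the field names) are hypothetical illustrations, not the paper's actual specification language or dataset.

```python
# Hypothetical sketch: encoding "common sense" driving-manual rules as
# predicates over a simplified vehicle state, then checking which rules
# the current state violates.
from dataclasses import dataclass

@dataclass
class VehicleState:
    speed_mph: float            # current vehicle speed
    speed_limit_mph: float      # posted limit at current location
    at_stop_sign: bool          # perception reports a stop sign ahead
    distance_to_sign_ft: float  # distance to that sign

def rule_obey_speed_limit(s: VehicleState) -> bool:
    """Manual: 'Never drive faster than the posted speed limit.'"""
    return s.speed_mph <= s.speed_limit_mph

def rule_stop_at_stop_sign(s: VehicleState) -> bool:
    """Manual: 'Come to a complete stop at a stop sign.'"""
    # Violated only if the vehicle reaches the sign without stopping.
    return not (s.at_stop_sign and s.distance_to_sign_ft < 5.0
                and s.speed_mph > 0.0)

RULES = [rule_obey_speed_limit, rule_stop_at_stop_sign]

def violations(state: VehicleState) -> list:
    """Return the names of the rules the current state violates."""
    return [r.__name__ for r in RULES if not r(state)]

state = VehicleState(speed_mph=38.0, speed_limit_mph=35.0,
                     at_stop_sign=False, distance_to_sign_ft=500.0)
print(violations(state))  # only the speed-limit rule is violated
```

A monitor like this would flag behavior that violates societal expectations even when no low-level safety invariant is broken, which is the kind of deviation the paper proposes to watch for.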
