Diego Ortiz, Leilani Gilpin, Alvaro A. Cardenas (University of California, Santa Cruz)

Autonomous vehicles must operate in a complex environment governed by social norms and expectations. While most work on securing autonomous vehicles has focused on safety, we argue that we also need to monitor for deviations from societal “common sense” rules to identify attacks against autonomous systems. In this paper, we present a first approach to encoding and understanding these common-sense driving behaviors by semi-automatically extracting rules from driving manuals. We encode the extracted rules in a formal specification and make them available online for other researchers.
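To illustrate what such an encoding could look like in practice, the sketch below shows one way a common-sense rule (e.g., “come to a complete stop at a stop sign”) might be expressed as a monitorable predicate over vehicle state. This is a minimal, hypothetical example, not the paper's actual specification language; the rule choice, thresholds, and state fields are assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class VehicleState:
    speed_mps: float            # current speed, meters per second
    dist_to_stop_sign_m: float  # distance to the nearest stop sign, meters
    has_stopped: bool           # whether the vehicle fully stopped during this approach

def stop_sign_rule(state: VehicleState) -> bool:
    """Hypothetical common-sense rule: within 2 m of a stop sign, the vehicle
    must have come to a complete stop (speed effectively zero) before proceeding."""
    if state.dist_to_stop_sign_m <= 2.0:
        return state.has_stopped or state.speed_mps < 0.1
    return True  # the rule is vacuously satisfied away from the sign

def monitor(trace: list[VehicleState]) -> list[int]:
    """Return the indices of trace states that violate the rule; a runtime
    monitor could flag these as potential attacks or faults."""
    return [i for i, s in enumerate(trace) if not stop_sign_rule(s)]
```

In a deployed monitor, a collection of such predicates (one per extracted rule) would be evaluated over the vehicle's state stream, and persistent violations would be reported for further investigation.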
