Diego Ortiz, Leilani Gilpin, Alvaro A. Cardenas (University of California, Santa Cruz)

Autonomous vehicles must operate in a complex environment with various social norms and expectations. While most of the work on securing autonomous vehicles has focused on safety, we argue that we also need to monitor for deviations from various societal “common sense” rules to identify attacks against autonomous systems. In this paper, we provide a first approach to encoding and understanding these common-sense driving behaviors by semi-automatically extracting rules from driving manuals. We encode our driving rules in a formal specification and make our rules available online for other researchers.
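The paper's actual specification format is not reproduced here, but as a hedged illustration of what encoding a manually extracted "common sense" driving rule might look like, the sketch below expresses one hypothetical rule ("yield to pedestrians in a crosswalk") as a predicate over observed vehicle states, together with a simple runtime monitor that flags deviations. All names, thresholds, and fields are assumptions for illustration, not the authors' encoding.

```python
# Illustrative sketch only (hypothetical rule encoding, not the paper's format):
# a common-sense driving rule as a predicate over vehicle states, plus a
# monitor that reports every state in a trace that violates a rule.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class VehicleState:
    speed_mph: float                 # current speed
    pedestrian_in_crosswalk: bool    # perception output
    distance_to_crosswalk_m: float   # distance to the nearest crosswalk


@dataclass
class Rule:
    name: str
    holds: Callable[[VehicleState], bool]  # returns True if the state complies


# Hypothetical rule extracted from a driving manual:
# "Yield to pedestrians in a crosswalk" -> the vehicle must not approach an
# occupied crosswalk at speed.
yield_to_pedestrians = Rule(
    name="yield-to-pedestrians",
    holds=lambda s: not (s.pedestrian_in_crosswalk
                         and s.distance_to_crosswalk_m < 20.0
                         and s.speed_mph > 5.0),
)


def monitor(trace: List[VehicleState], rules: List[Rule]) -> List[str]:
    """Return a violation message for every (state, rule) pair that fails."""
    violations = []
    for t, state in enumerate(trace):
        for rule in rules:
            if not rule.holds(state):
                violations.append(f"t={t}: violated {rule.name}")
    return violations


if __name__ == "__main__":
    trace = [
        VehicleState(speed_mph=25.0, pedestrian_in_crosswalk=False,
                     distance_to_crosswalk_m=80.0),
        VehicleState(speed_mph=22.0, pedestrian_in_crosswalk=True,
                     distance_to_crosswalk_m=15.0),  # violates the rule
    ]
    print(monitor(trace, [yield_to_pedestrians]))
```

In such a scheme, a deviation from an encoded rule that cannot be explained by the driving context could serve as a signal of a possible attack on the autonomous system, which is the kind of monitoring the abstract motivates.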
