Automotive and Autonomous Vehicle Security (AutoSec) Workshop 2022
Note: All times are in PDT (UTC-7) and all sessions are held in Kon Tiki Ballroom.
Best Demo Award Voting (ends at 4:40pm): https://www.surveymonkey.com/r/7BFPCJD
Future of AutoSec Voting (always open for your input!): https://www.surveymonkey.com/r/9Q7JJMH
Sunday, 24 April
-
Speaker's Biography
Dongyan Xu is the Samuel Conte Professor of Computer Science at Purdue University and director of CERIAS, Purdue’s cybersecurity research and education center. His research focuses on cyber and cyber-physical security, especially for emerging platforms such as robotic vehicles, IoT, and digital manufacturing systems. Past and current sponsors of his research include AFOSR, AFRL, CERDEC, DARPA, DOE, IARPA, NSA, NSF, ONR, and Sandia National Labs. He has received multiple awards from major cybersecurity conferences for his research papers on kernel malware defense, memory forensics, advanced persistent threat (APT) analytics, and IoT vulnerability discovery.
-
Yulong Cao (University of Michigan), Ningfei Wang (UC Irvine), Chaowei Xiao (Arizona State University), Dawei Yang (University of Michigan), Jin Fang (Baidu Research), Ruigang Yang (University of Michigan), Qi Alfred Chen (UC Irvine), Mingyan Liu (University of Michigan) and Bo Li (University of Illinois at Urbana-Champaign)
In autonomous driving (AD) vehicles, Multi-Sensor Fusion (MSF) is used to combine perception results from multiple sensors, such as LiDARs (Light Detection And Ranging) and cameras, for both accuracy and robustness. In this work, we design the first attack that fundamentally defeats MSF-based AD perception by generating 3D adversarial objects. This demo includes videos and figures showing the generated 3D adversarial objects and their end-to-end consequences.
-
Hyungsub Kim (Purdue University), Muslum Ozgur Ozmen (Purdue University), Antonio Bianchi (Purdue University), Z. Berkay Celik (Purdue University) and Dongyan Xu (Purdue University)
-
Ali A. Abdallah (UC Irvine), Zaher M. Kassas (UC Irvine) and Chiawei Lee (US Air Force Test Pilot School)
-
Pritam Dash (University of British Columbia) and Karthik Pattabiraman (University of British Columbia)
Robotic Vehicles (RVs) rely extensively on sensor inputs to operate autonomously. Physical attacks such as sensor tampering and spoofing feed erroneous sensor measurements that deviate RVs from their course and result in mission failures. We present PID-Piper, a novel framework for automatically recovering RVs from physical attacks. We use machine learning (ML) to design an attack-resilient FeedForward Controller (FFC), which runs in tandem with the RV’s primary controller and monitors it. Under attack, the FFC takes over from the RV’s primary controller to recover the RV and allow it to complete its mission successfully. Our evaluation on six RV systems, including three real RVs, shows that PID-Piper allows RVs to complete their missions successfully despite attacks in 83% of cases.
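As a rough illustration of this monitor-and-takeover logic, here is a minimal sketch; the controller classes, the ffc_predict function, and the residual threshold are illustrative assumptions, not PID-Piper's actual code or parameters.

    # Sketch: an ML feedforward controller (FFC) shadows the primary PID
    # controller and takes over when their outputs diverge under attack.

    class SimplePID:
        def __init__(self, kp=1.0, ki=0.1, kd=0.05):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral = 0.0
            self.prev_err = 0.0

        def update(self, err, dt=0.01):
            self.integral += err * dt
            deriv = (err - self.prev_err) / dt
            self.prev_err = err
            return self.kp * err + self.ki * self.integral + self.kd * deriv

    THRESHOLD = 0.5  # residual bound; would be tuned per vehicle in practice

    def control_step(state, setpoint, pid, ffc_predict):
        u_fb = pid.update(setpoint - state)  # feedback path uses (attackable) sensors
        u_ff = ffc_predict(state, setpoint)  # attack-resilient feedforward prediction
        if abs(u_fb - u_ff) > THRESHOLD:     # divergence suggests spoofed sensors:
            return u_ff                      # the FFC takes over
        return u_fb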
-
Zhisheng Hu (Baidu), Shengjian Guo (Baidu) and Kang Li (Baidu)
In this demo, we disclose a potential bug in the Tesla Full Self-Driving (FSD) software. A vulnerable FSD vehicle can be deterministically tricked into running a red light. Attackers can cause a victim vehicle to behave this way without tampering or interfering with any sensors or physically accessing the vehicle. We infer that this behavior is caused by Tesla FSD’s decision system failing to take in the latest perception signals once it enters a specific mode. We call this problematic behavior the Pringles Syndrome. Our study of multiple other autonomous driving implementations shows that this failed state update is a common failure pattern that especially needs attention in autonomous driving software testing and development.
-
Alireza Mohammadi (University of Michigan-Dearborn), Hafiz Malik (University of Michigan-Dearborn) and Masoud Abbaszadeh (GE Global Research)
Recent automotive hacking incidents have demonstrated that when an adversary manages to gain access to a safety-critical CAN, severe safety implications ensue. Under such threats, this paper explores the capabilities of an adversary who is interested in engaging the car brakes at full speed and would like to cause wheel lockup conditions leading to catastrophic road injuries. This paper shows that the physical capabilities of a CAN attacker can be studied through the lens of closed-loop attack policy design. In particular, it is demonstrated that the adversary can cause wheel lockups by means of closed-loop attack policies for commanding the frictional brake actuators, despite limited knowledge of the tire-road interaction characteristics. The effectiveness of the proposed wheel lockup attack policy is shown via numerical simulations under different road conditions.
-
Abdullah Zubair Mohammed (Virginia Tech), Yanmao Man (University of Arizona), Ryan Gerdes (Virginia Tech), Ming Li (University of Arizona) and Z. Berkay Celik (Purdue University)
The Controller Area Network (CAN) bus standard is the most common in-vehicle network that provides communication between Electronic Control Units (ECUs). CAN messages lack authentication and data integrity protection mechanisms and hence are vulnerable to attacks, such as impersonation and data injection, at the digital level. The physical layer of the bus allows for a one-way change of a given bit to accommodate prioritization; viz. a recessive bit (1) may be changed to a dominant one (0). In this paper, we propose a physical-layer data manipulation attack wherein multiple compromised ECUs collude to cause 0→1 (i.e., dominant to recessive) bit-flips, allowing for arbitrary bit-flips in transmitted messages. The attack is carried out by inducing transient voltages in the CAN bus that are heightened due to the parasitic reactance of the bus and non-ideal properties of the line drivers. Simulation results indicate that, with more than eight compromised ECUs, an attacker can induce a sufficient voltage drop to cause dominant bits to be flipped to recessive ones.
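A back-of-the-envelope sketch of the threshold effect described above; all electrical values below are illustrative assumptions tuned to the abstract's eight-ECU figure, not measurements from the paper.

    # Toy model: each colluding ECU's driver adds a transient voltage drop to
    # the CANH-CANL differential; once it falls below the receiver's dominant
    # threshold, a transmitted dominant bit (0) is sampled as recessive (1).

    def sampled_bit(num_attackers, v_dominant=2.0, drop_per_ecu=0.13, v_threshold=0.9):
        v_diff = v_dominant - num_attackers * drop_per_ecu
        return 0 if v_diff > v_threshold else 1  # 0 = dominant, 1 = recessive

    for n in range(6, 12):
        print(n, "ECUs ->", "flip" if sampled_bit(n) == 1 else "no flip")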
-
Paul Agbaje (University of Texas at Arlington), Afia Anjum (University of Texas at Arlington), Arkajyoti Mitra (University of Texas at Arlington), Gedare Bloom (University of Colorado Colorado Springs) and Habeeb Olufowobi (University of Texas at Arlington)
The landscape of automotive vehicle attack surfaces continues to grow, and vulnerabilities in the controller area network (CAN) expose vehicles to cyber-physical risks and attacks that can endanger the safety of passengers and pedestrians. Intrusion detection systems (IDSs) for CAN have emerged as a key mitigation approach for these risks, but uniform methods to compare proposed IDS techniques are lacking. In this paper, we present a framework for comparative performance analysis of state-of-the-art IDSs for the CAN bus, providing a consistent methodology to evaluate and assess proposed approaches. This framework relies on previously published datasets comprising message logs recorded from a real vehicle's CAN bus, coupled with traditional classifier performance metrics, to reduce the discrepancies that arise when comparing IDS approaches from disparate sources.
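In spirit, such a framework reduces to running every IDS over the same labeled captures and reporting uniform metrics. A hypothetical sketch follows; the dataset and the two classifiers are placeholders, not the framework's actual components.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import f1_score, precision_score, recall_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(5000, 8))             # stand-in per-frame CAN features
    y = (X[:, 0] + X[:, 1] > 1.5).astype(int)  # stand-in intrusion labels

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    for name, ids in [("forest", RandomForestClassifier(random_state=0)),
                      ("logistic", LogisticRegression(max_iter=1000))]:
        pred = ids.fit(X_tr, y_tr).predict(X_te)
        # Same split, same metrics for every candidate IDS.
        print(name, precision_score(y_te, pred),
              recall_score(y_te, pred), f1_score(y_te, pred))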
-
Raymond Muller (Purdue University), Yanmao Man (University of Arizona), Z. Berkay Celik (Purdue University), Ming Li (University of Arizona) and Ryan Gerdes (Virginia Tech)
With emerging vision-based autonomous driving (AD) systems, it becomes increasingly important to have datasets for evaluating their correct operation and identifying potential security flaws. However, when collecting a large amount of data, either human experts manually label potentially hundreds of thousands of image frames, or systems label the data with machine learning algorithms in the hope that the accuracy is good enough for the application. This becomes especially problematic when tracking context information, such as the location and velocity of surrounding objects, which is useful for evaluating the correctness and improving the stability and robustness of AD systems.
-
Zhisheng Hu (Baidu Security), Junjie Shen (UC Irvine), Shengjian Guo (Baidu Security), Xinyang Zhang (Baidu Security), Zhenyu Zhong (Baidu Security), Qi Alfred Chen (UC Irvine) and Kang Li (Baidu Security)
Safety and security play critical roles in the success of Autonomous Driving (AD) systems. Since AD systems heavily rely on AI components, the safety and security research of such components has also received great attention in recent years. While it is widely recognized that AI component-level (mis)behavior does not necessarily lead to AD system-level impacts, most existing work still adopts only component-level evaluation. To fill this critical methodology-level gap from component-level to real system-level impact, a system-driven evaluation platform jointly constructed by the community could be the solution. In this paper, we present PASS (Platform for Auto-driving Safety and Security), a system-driven evaluation prototype based on simulation. By sharing our platform-building concept and preliminary efforts, we hope to call on the community to build a uniform and extensible platform that makes AI safety and security work sufficiently meaningful at the system level.
-
Takami Sato (UC Irvine) and Qi Alfred Chen (UC Irvine)
Deep Neural Network (DNN)-based lane detection is widely used in autonomous driving technologies. At the same time, recent studies demonstrate that adversarial attacks on lane detection can cause serious consequences for particular production-grade autonomous driving systems. However, the generality of these attacks, especially their effectiveness against other state-of-the-art lane detection approaches, has not been well studied. In this work, we report our progress on the first large-scale empirical study evaluating the robustness of 4 major types of lane detection methods under 3 types of physical-world adversarial attacks in end-to-end driving scenarios. We find that each lane detection method has different security characteristics; in particular, some models are highly vulnerable to certain types of attack. Surprisingly, but probably not coincidentally, popular production lane centering systems select the lane detection approach that shows higher resistance to such attacks. In the near future, more and more automakers will include autonomous driving features in their products. We hope that our research will help as many automakers as possible recognize the risks in choosing lane detection algorithms.
-
Barak Davidovich (Ben-Gurion University of the Negev), Ben Nassi (Ben-Gurion University of the Negev) and Yuval Elovici (Ben-Gurion University of the Negev)
In this study, we propose an innovative method for the real-time detection of GPS spoofing attacks targeting drones, based on the video stream captured by a drone’s camera. The proposed method collects frames from the video stream along with their GPS locations; by calculating the correlation between frames, it can detect a GPS spoofing attack on a drone. We first analyze the performance of the suggested method in a controlled environment by conducting experiments on a flight simulator that we developed, and then analyze its performance in the real world using a DJI drone. Our method can provide different levels of security against GPS spoofing attacks, depending on the detection interval required. For example, for a drone flying at an altitude of 50-100 meters over an urban area at an average speed of 4 km/h in low ambient light, the proposed method detects any GPS spoofing attack in which the spoofed location is 1-4 meters (2.5 meters on average) from the real location.
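One way to picture the core check is to compare the displacement implied by the video against the displacement claimed by GPS. The sketch below is hedged: the visual-displacement input and the 2.5-meter tolerance are assumptions drawn from the figures above, not the authors' code.

    import math

    def haversine_m(lat1, lon1, lat2, lon2):
        # Great-circle distance in meters between two GPS fixes.
        R = 6371000.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * R * math.asin(math.sqrt(a))

    def spoofing_suspected(gps_prev, gps_cur, visual_disp_m, tol_m=2.5):
        # visual_disp_m: displacement estimated from consecutive video frames
        # (e.g., via optical flow). A mismatch with the GPS-claimed
        # displacement beyond the tolerance raises an alarm.
        gps_disp = haversine_m(*gps_prev, *gps_cur)
        return abs(gps_disp - visual_disp_m) > tol_m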
-
Mulong Luo (Cornell University) and G. Edward Suh (Cornell University)
Effective coordination of sensor inputs requires correct timestamping of sensor data in robotic vehicles. Although an existing trusted execution environment (TEE) can prevent an adversary from directly changing timestamp values, either at the clock or while stored in memory, timestamp integrity can still be compromised by an interrupt between the sensor read and the timestamp read. We analytically and experimentally evaluate how timestamp integrity violations affect the localization of robotic vehicles. The results indicate that the interrupt attack can cause significant localization errors, which threaten vehicle safety and need to be prevented with additional countermeasures.
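The vulnerable pattern, and one possible guard, can be sketched as follows; the guard is our illustrative assumption, not the paper's proposed countermeasure.

    import time

    def read_sample_vulnerable(sensor):
        value = sensor.read()
        # <-- a malicious interrupt here stalls execution, so the timestamp
        #     below can lag the actual sample time arbitrarily
        return value, time.monotonic()

    def read_sample_guarded(sensor, max_gap_s=0.001):
        while True:
            t0 = time.monotonic()
            value = sensor.read()
            t1 = time.monotonic()
            if t1 - t0 <= max_gap_s:           # no suspicious stall detected
                return value, 0.5 * (t0 + t1)  # true sample time is bounded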
-
Khaled Serag (Purdue University), Vireshwar Kumar (IIT Delhi), Z. Berkay Celik (Purdue University), Rohit Bhatia (Purdue University), Mathias Payer (EPFL) and Dongyan Xu (Purdue University)
This demo shows how vulnerable CAN’s error handling mechanism is by presenting three recent attacks that take advantage of this mechanism.
-
Mohammed Lamine Bouchouia (Telecom Paris - Institut Polytechnique de Paris), Jean-Philippe Monteuuis (Qualcomm Technologies Inc), Houda Labiod (Telecom Paris - Institut Polytechnique de Paris), Ons Jelassi (Telecom Paris - Institut Polytechnique de Paris), Wafa Ben Jaballah (Thales) and Jonathan Petit (Qualcomm Technologies Inc)
Ensuring the safety of connected and automated vehicles (CAVs) is critical for their public adoption. However, security attacks targeting CAVs are a significant deterrent to achieving public trust in AVs. Implementing and testing such attacks and their corresponding countermeasures in real road conditions is costly and time-consuming; an automotive simulator avoids these drawbacks. We therefore present our security simulator for CAVs, which includes both V2X and sensor data synchronized in simulation time.
-
Ben Nassi (Ben-Gurion University of the Negev), Elad Feldman (Ben-Gurion University of the Negev), Aviel Levy (Ben-Gurion University of the Negev), Yaron Pirutin (Ben-Gurion University of the Negev), Asaf Shabtai (Ben-Gurion University of the Negev), Ryusuke Masuoka (Fujitsu System Integration Laboratories) and Yuval Elovici (Ben-Gurion University of the Negev)
-
Mars Rayno (Colorado State University) and Jeremy Daily (Colorado State University)
CAN bus traces from repeated dynamic events often do not align in time. Dynamic Time Warping (DTW) is a tool for efficiently aligning traces by time. For this demo, multiple CAN bus traces were taken from the same vehicle performing the same maneuvers; using DTW, the similarity of the traces could be quantified. Specifically, CAN bus traces from a heavy truck performing the same test sequence were compared: the DTW distance score was 661, compared to a direct Euclidean distance score of 24,032, showing that DTW can accommodate differences in timing when comparing CAN traces. DTW techniques help improve pattern matching for similar driving behaviors.
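For reference, this is the textbook dynamic-programming form of DTW that such a comparison relies on; a minimal sketch with synthetic traces, not the demo's actual code.

    import numpy as np

    def dtw_distance(a, b):
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    # The same maneuver shifted in time: DTW stays small, while a direct
    # point-by-point (Euclidean) comparison is inflated by the misalignment.
    t = np.linspace(0, 10, 200)
    print(dtw_distance(np.sin(t), np.sin(t - 0.8)),
          np.linalg.norm(np.sin(t) - np.sin(t - 0.8)))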
-
Wenbo Ding (University at Buffalo), Long Cheng (Clemson University), Xianghang Mi (University of Science and Technology of China), Ziming Zhao (University at Buffalo) and Hongxin Hu (University at Buffalo)
Current voice assistant platforms allow users to interact with their cars through voice commands. However, this convenience comes with substantial cyber-risk to voice-controlled vehicles. In this demo, we show that a “malicious” skill with unwanted control actions on the Alexa platform can hijack voice commands intended for a benign third-party connected-vehicle skill.
-
To be added
Speaker's Biography
Kell Rozman is the Security Software Engineering Senior Manager responsible for Toyota Motor North America's Security Engineering organization, including cloud security, application security, red team, security analytics, security automation, and anything else thrown his way. His efforts allow Toyota to develop secure connected-car applications and infrastructure while pioneering the latest technology to enable Toyota's journey to becoming a mobility company. He has a diverse background in information security in both commercial and defense applications, and has helped protect the nation's defense and commercial companies by solving complex cybersecurity and mission-awareness problems for DARPA, DoD, Boeing, Lockheed Martin, and managed detection and response organizations. He has worked in the domains of threat hunting, security analytics, malware attribution, penetration testing, threat intelligence, and software assurance on state-of-the-art solutions that directly address the growing threat posed by malicious cyber actors.
-
Shaik Sabiha (University at Buffalo), Keyan Guo (University at Buffalo), Foad Hajiaghajani (University at Buffalo), Chunming Qiao (University at Buffalo), Hongxin Hu (University at Buffalo) and Ziming Zhao (University at Buffalo)
Light Detection And Ranging (LiDAR) is a critical component in autonomous vehicles that aids in object detection. It generates point clouds by projecting light rays into its surroundings. This demo studies the effect of paint colors on autonomous driving perception. The experimental results show that different colors do affect the LiDAR sensor’s point-cloud intensity.
-
Ziwen Wan (UC Irvine), Junjie Shen (UC Irvine), Jalen Chuang (UC Irvine), Xin Xia (UCLA), Joshua Garcia (UC Irvine), Jiaqi Ma (UCLA) and Qi Alfred Chen (UC Irvine)
-
Yi Zhu (State University of New York at Buffalo), Chenglin Miao (University of Georgia), Foad Hajiaghajani (State University of New York at Buffalo), Mengdi Huai (University of Virginia), Lu Su (Purdue University) and Chunming Qiao (State University of New York at Buffalo)
As a fundamental task in autonomous driving, LiDAR semantic segmentation aims to provide semantic understanding of the driving environment. We demonstrate that existing LiDAR semantic segmentation models in autonomous driving systems can be easily fooled by placing some simple objects on the road, such as cardboard and traffic signs. We show that this type of attack can hide a vehicle and cause the road surface to be perceived as roadside vegetation.
-
Zachariah Threet (Tennessee Tech), Christos Papadopoulos (University of Memphis), Proyash Poddar (Florida International University), Alex Afanasyev (Florida International University), William Lambert (Tennessee Tech), Haley Burnell (Tennessee Tech), Sheikh Ghafoor (Tennessee Tech) and Susmit Shannigrahi (Tennessee Tech)
Data-centric architectures are a candidate for in-vehicle communication. They add naming standardization, data provenance, and security, and they improve interoperability between different ECUs and networks. In this demo, we demonstrate the feasibility and advantages of data-centric architectures through Named Data Networking (NDN). We deploy a bench-top testbed using Raspberry Pis and replay real CAN data.
-
Yulong Cao (University of Michigan), Yanan Guo (University of Pittsburgh), Takami Sato (UC Irvine), Qi Alfred Chen (UC Irvine), Z. Morley Mao (University of Michigan) and Yueqiang Cheng (NIO)
Advanced driver-assistance systems (ADAS) are widely used by modern vehicle manufacturers to automate, adapt, and enhance vehicle technology for safety and better driving. In this work, we design a practical attack against automated lane centering (ALC), a crucial functionality of ADAS, using remote adversarial patches. We identify that the back of a vehicle is an effective attack vector and improve the attack's robustness by considering various input frames. The demo includes videos showing that our attack can divert the victim vehicle out of its lane on a representative ADAS, Openpilot, in a simulator.
-
Pablo Moriano (Oak Ridge National Laboratory), Robert A. Bridges (Oak Ridge National Laboratory) and Michael D. Iannacone (Oak Ridge National Laboratory)
Vehicular Controller Area Networks (CANs) are susceptible to cyber attacks of different levels of sophistication. Fabrication attacks are the easiest to administer—an adversary simply sends (extra) frames on a CAN—but also the easiest to detect because they disrupt frame frequency. To overcome time-based detection methods, adversaries must administer masquerade attacks by sending frames in lieu of (and therefore at the expected time of) benign frames but with malicious payloads. Research efforts have proven that CAN attacks, and masquerade attacks in particular, can affect vehicle functionality. Examples include causing unintended acceleration, deactivating the vehicle’s brakes, and steering the vehicle. We hypothesize that masquerade attacks modify the nuanced correlations of CAN signal time series and how they cluster together. Therefore, changes in cluster assignments should indicate anomalous behavior. We confirm this hypothesis by leveraging our previously developed capability for reverse engineering CAN signals (i.e., CAN-D [Controller Area Network Decoder]) and focus on advancing the state of the art for detecting masquerade attacks by analyzing time series extracted from raw CAN frames. Specifically, we demonstrate that masquerade attacks can be detected by computing time series clustering similarity using hierarchical clustering on the vehicle’s CAN signals (time series) and comparing the clustering similarity across CAN captures with and without attacks. We test our approach in a previously collected CAN dataset with masquerade attacks (i.e., the ROAD dataset) and develop a forensic tool as a proof of concept to demonstrate the potential of the proposed approach for detecting CAN masquerade attacks.
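A hedged sketch of the detection step follows; the cluster count k, the similarity measure (adjusted Rand index), and the alarm threshold are our assumptions, not the paper's values.

    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from scipy.spatial.distance import squareform
    from sklearn.metrics import adjusted_rand_score

    def signal_clusters(signals, k=4):
        # signals: (num_signals, T) array of time series from one CAN capture.
        corr = np.corrcoef(signals)
        dist = squareform(1.0 - corr, checks=False)  # correlation distance
        return fcluster(linkage(dist, method="average"), k, criterion="maxclust")

    def masquerade_suspected(benign_capture, test_capture, threshold=0.7):
        # A masquerade attack perturbs signal correlations, so the clusterings
        # of the two captures should disagree.
        score = adjusted_rand_score(signal_clusters(benign_capture),
                                    signal_clusters(test_capture))
        return score < threshold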
-
Edwin Yang (University of Oklahoma) and Song Fang (University of Oklahoma)
With the advent of in-vehicle infotainment (IVI) systems (e.g., Android Automotive) and other portable devices (e.g., smartphones) that may be brought into a vehicle, it becomes crucial to establish a secure channel between the vehicle and an in-vehicle device or between two in-vehicle devices. Traditional pairing schemes are tedious, as they require user interaction (e.g., manually typing in a passcode or bringing the two devices close to each other). Modern vehicles, together with smartphones and many emerging Internet-of-Things (IoT) devices (e.g., dashcams), are often equipped with built-in Global Positioning System (GPS) receivers. In this paper, we propose a GPS-based key establishment technique, called GPSKey, that leverages the inherent randomness of vehicle movement. Specifically, vehicle movement changes with road ground conditions, traffic situations, and pedal operations, and thus may have rich randomness. Meanwhile, two in-vehicle GPS receivers can observe the same vehicle movement and exploit it for key establishment without requiring user interaction. We implement a prototype of GPSKey on top of off-the-shelf devices. Experimental results show that legitimate devices in the same vehicle require 1.18 minutes of driving on average to establish a 128-bit key. Meanwhile, an attacker who follows or leads the victim’s vehicle is unable to infer the key.
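A toy illustration of the bit-extraction idea follows; this is not GPSKey's actual quantizer, and the guard band and noise levels are assumptions. Both receivers observe the same motion plus independent GPS noise, so their quantized bits largely agree.

    import numpy as np

    def movement_bits(speeds, guard=0.3):
        # Map speed changes to bits, discarding changes too small to be robust
        # against independent GPS noise; indices of kept bits are recorded so
        # the two devices can later reconcile them over a public channel.
        bits = {}
        for i, (prev, cur) in enumerate(zip(speeds, speeds[1:])):
            delta = cur - prev
            if abs(delta) >= guard:
                bits[i] = 1 if delta > 0 else 0
        return bits

    rng = np.random.default_rng(1)
    motion = np.cumsum(rng.normal(0, 1, 300))             # shared vehicle movement
    a = movement_bits(motion + rng.normal(0, 0.05, 300))  # receiver A's view
    b = movement_bits(motion + rng.normal(0, 0.05, 300))  # receiver B's view

    common = sorted(set(a) & set(b))  # indices can be exchanged publicly
    agreement = sum(a[i] == b[i] for i in common) / len(common)
    print(f"{agreement:.0%} of {len(common)} bits agree")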
-
Alireza Mohammadi (University of Michigan-Dearborn) and Hafiz Malik (University of Michigan-Dearborn)
Motivated by ample evidence in the automotive cybersecurity literature that car brake ECUs can be maliciously reprogrammed, it has been shown that an adversary who can directly control the frictional brake actuators can induce wheel lockup conditions despite having limited knowledge of the tire-road interaction characteristics. In this paper, we investigate the destabilizing effect of such wheel lockup attacks on the lateral motion stability of vehicles from a robust stability perspective. Furthermore, we propose a quadratic programming (QP) problem that the adversary can solve to find the optimal destabilizing longitudinal slip reference values.
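The paper's exact program is not reproduced here; as a generic illustration of posing such a policy search as a QP over slip references, with all matrices being toy values:

    import numpy as np
    from scipy.optimize import minimize

    Q = np.array([[2.0, 0.5],   # illustrative quadratic cost over the two
                  [0.5, 1.0]])  # axle slip references (must be PSD)
    c = np.array([-1.0, -0.5])  # illustrative linear term

    res = minimize(lambda x: 0.5 * x @ Q @ x + c @ x,
                   x0=np.zeros(2),
                   bounds=[(0.0, 1.0)] * 2,  # braking slip ratio lies in [0, 1]
                   method="SLSQP")
    print(res.x)  # toy "optimal destabilizing" slip reference values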
-
Bo Yang (Zhejiang University), Yushi Cheng (Tsinghua University), Zizhi Jin (Zhejiang University), Xiaoyu Ji (Zhejiang University) and Wenyuan Xu (Zhejiang University)
Due to the booming of autonomous driving, in which LiDAR plays a critical role in environment perception, LiDAR reliability issues have drawn much attention recently. LiDARs usually rely on deep neural models for 3D point cloud perception, which have been demonstrated to be vulnerable to imperceptible adversarial examples. However, prior work usually manipulates point clouds in the digital world without considering the physical working principle of an actual LiDAR. As a result, the generated adversarial point clouds may be realizable and effective in simulation but cannot be perceived by physical LiDARs. In this work, we introduce the physical principle of LiDARs and propose a new method for generating 3D adversarial point clouds that conforms to it and can achieve two types of spoofing attacks: object hiding and object creating. We also evaluate the effectiveness of the proposed method with two 3D object detectors on the KITTI vision benchmark.
-
Yunpeng Luo (UC Irvine), Ningfei Wang (UC Irvine), Bo Yu (PerceptIn), Shaoshan Liu (PerceptIn) and Qi Alfred Chen (UC Irvine)
Autonomous Driving (AD) is a rapidly developing technology, and its security issues have been studied by various recent research works. With the growing interest and investment in leveraging intelligent infrastructure support for practical AD, AD systems may have new opportunities to defend against existing AD attacks. In this paper, we are the first to systematically explore this new AD security design space leveraging emerging infrastructure-side support, which we call Infrastructure-Aided Autonomous Driving Defense (I-A2D2). We first taxonomize existing AD attacks based on infrastructure-side capabilities, then analyze potential I-A2D2 design opportunities and requirements. We further discuss the potential design challenges for these I-A2D2 design directions to be effective in practice.
-
Mohit Kumar Jangid (Ohio State University) and Zhiqiang Lin (Ohio State University)
Being safer, cleaner, and more efficient, connected and autonomous vehicles (CAVs) are expected to be the dominant vehicles of future transportation systems. However, there are enormous security and privacy challenges to address while also considering efficiency and scalability. One key challenge is how to efficiently authenticate a vehicle in the ad-hoc CAV network and ensure its tamper-resistance, accountability, and non-repudiation. In this paper, we present the design and implementation of a Vehicle-to-Vehicle (V2V) protocol leveraging a trusted execution environment (TEE), and show how this TEE-based protocol achieves the objectives of authentication, privacy, accountability, and revocation, as well as scalability and efficiency. We hope that our TEE-based V2V protocol can inspire further research into CAV security and privacy, particularly into how to leverage TEEs to solve some of the hard problems and bring CAVs closer to practice.
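As a hedged sketch of the authentication primitive such a protocol builds on (signing a beacon with a key sealed inside the TEE), with the Python cryptography package standing in for code that would actually run inside the enclave:

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    enclave_key = ec.generate_private_key(ec.SECP256R1())  # never leaves the TEE
    beacon = b"veh42|t=1650000000|lat=33.64|lon=-117.84"   # illustrative payload

    signature = enclave_key.sign(beacon, ec.ECDSA(hashes.SHA256()))

    # A receiving vehicle verifies against the sender's attested public key;
    # verify() raises InvalidSignature if the beacon was tampered with.
    enclave_key.public_key().verify(signature, beacon, ec.ECDSA(hashes.SHA256()))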
-
Aiping Xiong (Pennsylvania State University), Zekun Cai (Pennsylvania State University) and Tianhao Wang (University of Virginia)
Individuals’ interactions with connected autonomous vehicles (CAVs) involve sharing various data in a ubiquitous manner, raising novel challenges for privacy. The human factors of privacy must first be understood to promote consumers’ acceptance of CAVs. To inform privacy research in the context of CAVs, we discuss how the emerging technologies underlying CAVs pose new privacy challenges for drivers and passengers. We argue that the privacy design of CAVs should adopt a user-centered approach, which integrates human factors into the development and deployment of privacy-enhancing technologies, such as differential privacy.
-
David Balenson (SRI International)