Sun Hyoung Kim (Penn State), Cong Sun (Xidian University), Dongrui Zeng (Penn State), Gang Tan (Penn State)

Enforcing fine-grained Control-Flow Integrity (CFI) is critical for increasing software security. However, for commercial off-the-shelf (COTS) binaries, constructing high-precision Control-Flow Graphs (CFGs) is challenging because there is no source-level information, such as symbols and types, to assist in indirect-branch target inference. This lack of source-level information makes inferring targets for indirect calls especially difficult compared to other kinds of indirect branches. Points-to analysis could be a promising solution to this problem, but there is no practical points-to analysis framework for inferring indirect-call targets at the binary level. Value Set Analysis (VSA) is the state-of-the-art binary-level points-to analysis, but it does not scale to large programs; it is also highly conservative by design and thus leads to low-precision CFG construction. In this paper, we present a binary-level points-to analysis framework called BPA for constructing sound and high-precision CFGs. It is a new way of performing points-to analysis at the binary level, with a focus on resolving indirect-call targets. BPA employs several major techniques, including a block memory model and a memory-access analysis that partitions memory into blocks, to achieve a better balance between scalability and precision. In our evaluation, we demonstrate that BPA achieves a 34.5% precision improvement rate over the current state-of-the-art technique without introducing false negatives.
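To make the core problem concrete, the minimal C sketch below (an illustrative example, not code from the paper) shows an indirect call whose legitimate target set a binary-level points-to analysis must infer: in the compiled binary the call appears as something like `call rax`, with no type or symbol information to indicate which functions it may legally reach.

```c
#include <stdio.h>

/* Two possible callees. At the binary level there are no source types
   or symbols to tell an analysis which of them an indirect call may reach. */
static void log_plain(const char *msg) { printf("plain: %s\n", msg); }
static void log_fancy(const char *msg) { printf("*** %s ***\n", msg); }

typedef void (*logger_t)(const char *);

int main(int argc, char **argv) {
    /* The function pointer's value depends on runtime input, so the
       compiled indirect call has more than one legitimate target. */
    logger_t log = (argc > 1) ? log_fancy : log_plain;

    /* A sound and precise CFG would list exactly {log_plain, log_fancy}
       as targets of this call site; an over-approximate CFG (e.g., "any
       address-taken function") would also admit unrelated functions,
       weakening the CFI policy enforced from it. */
    log("hello");
    return 0;
}
```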
