Kaiming Huang (Penn State University), Yongzhe Huang (Penn State University), Mathias Payer (EPFL), Zhiyun Qian (UC Riverside), Jack Sampson (Penn State University), Gang Tan (Penn State University), Trent Jaeger (Penn State University)

Despite extensive research on defenses that protect stack objects from the exploitation of memory errors, much stack data remains at risk. Historically, stack defenses have focused on protecting code pointers, such as return addresses, but emerging techniques for exploiting memory errors motivate the need for practical solutions that protect stack data objects as well. However, recent approaches provide an incomplete view of security by failing to account for memory errors comprehensively and by unnecessarily limiting the set of objects that can be protected. In this paper, we present DataGuard, a system that statically identifies which stack objects are safe from spatial, type, and temporal memory errors so that those objects can be protected efficiently. DataGuard improves security through a more comprehensive and accurate safety analysis that proves a larger number of stack objects safe from memory errors, while ensuring that no unsafe stack objects are mistakenly classified as safe. DataGuard's analysis of server programs and the SPEC CPU2006 benchmark suite shows that DataGuard improves security by: (1) ensuring that no memory safety violations are possible for any stack objects classified as safe, removing 6.3% of the stack objects previously classified safe by the Safe Stack method, and (2) blocking exploitation of all 118 stack vulnerabilities in the CGC Binaries. DataGuard extends the scope of stack protection by validating as safe over 70% of the stack objects classified as unsafe by the Safe Stack method, so that, on average, 91.45% of all stack objects can only be referenced safely. By identifying more functions whose stack objects are all safe, DataGuard reduces the overhead of using Clang's Safe Stack defense to protect the SPEC CPU2006 benchmarks from 11.3% to 4.3%. Thus, DataGuard shows that a comprehensive and accurate analysis can both increase the scope of stack data protection and reduce overheads.
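To make the three error classes concrete, the following C sketch illustrates the kinds of stack objects such a safety analysis must distinguish. This is a minimal illustration, not code from the paper; all variable and function names are hypothetical.

```c
#include <string.h>

/* Hypothetical sketch of the three classes of stack memory errors
 * (spatial, type, temporal); none of these names come from DataGuard. */

char *dangling; /* global used to leak a stack address */

void examples(const char *input) {
    char id[8];             /* safe: every access is provably in bounds */
    memcpy(id, "abcdefg", 8);

    char buf[16];           /* spatially unsafe: strcpy may write past buf */
    strcpy(buf, input);

    long tag;               /* type unsafe: storage reinterpreted through */
    *(char **)&tag = buf;   /* an incompatible pointer cast               */

    char local[4];          /* temporally unsafe: the address escapes and */
    dangling = local;       /* may be dereferenced after examples returns */
}
```

Under a Safe Stack style defense, roughly speaking, only objects like `id` may remain on the safe stack, while objects like `buf`, `tag`, and `local` must be relocated to the separate unsafe stack; a more precise safety analysis shrinks the latter set and thus the protection overhead.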
