Laura Matzen, Michelle A. Leger, Geoffrey Reedy (Sandia National Laboratories)

Binary reverse engineers combine automated and manual techniques to answer questions about software. However, when evaluating automated analysis results, they rarely have additional information to help them contextualize those results in the binary. We expect that humans could more readily understand the binary program and these analysis results if they had access to information usually kept internal to the analysis, such as value-set analysis (VSA) information. Yet automated analyses often trade precision for scalability, and imprecise information might hinder human decision-making.
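
As a rough illustration of the kind of internal analysis information at stake, the hypothetical C snippet below (our example, not one of the paper's stimuli) annotates a single program point with the value sets that a precise and an imprecise VSA might report:

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical illustration of the per-variable, per-program-point
 * value sets a VSA computes; not taken from the paper. */
int main(int argc, char **argv) {
    int n = (argc > 1) ? atoi(argv[1]) : 0;  /* unknown at analysis time */

    int x = 2 * (n % 5);
    /* Precise VSA:   x ∈ {-8, -6, -4, -2, 0, 2, 4, 6, 8}               */
    /* Imprecise VSA: x ∈ [INT_MIN, INT_MAX]  (sound but uninformative)  */

    if (x > 10) {
        /* Unreachable given the precise set; unknown given the imprecise one. */
        puts("x exceeded 10");
    }
    return 0;
}
```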

To assess how the precision of VSA information affects human analysts, we designed a human study in which reverse engineers answered short information-flow problems, determining whether code snippets would print sensitive information. We hypothesized that precise VSA information would help participants analyze code faster and more accurately, and that imprecise VSA information would lead to slower, less accurate performance than no VSA information at all. We presented hand-crafted code snippets with precise, imprecise, or no VSA information in a blocked design, recording participants’ eye movements, response times, and accuracy while they analyzed the snippets. Our data showed that precise VSA information changed participants’ problem-solving strategies and supported faster, more accurate analyses. Surprisingly, however, imprecise VSA information also increased accuracy relative to no VSA information, likely because of the extra time participants spent working through the code.
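
To make the task concrete, the following hypothetical snippet (again ours, not an actual study stimulus) sketches an information-flow problem of the kind described, where the precision of the VSA annotation determines how easily an analyst can rule out a leak:

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical information-flow problem in the style described above:
 * does this program print the sensitive value? */
int main(int argc, char **argv) {
    int sensitive = 1234;                       /* value to protect */
    int user = (argc > 1) ? atoi(argv[1]) : 0;  /* untrusted input  */
    int index = 1 + (user & 3);                 /* always in 1..4   */

    int buffer[8] = {0};
    buffer[index] = sensitive;

    /* Precise VSA:   index ∈ {1, 2, 3, 4}, so buffer[0] never holds the
     * secret and the print below provably does not leak it.
     * Imprecise VSA: index ∈ [INT_MIN, INT_MAX], so a leak cannot be ruled
     * out from the annotation alone; the code must be worked through by hand. */
    printf("%d\n", buffer[0]);
    return 0;
}
```

With the precise value set, the yes/no question is answerable almost at a glance; with the imprecise one, the analyst must fall back on manual reasoning, which is consistent with the extra time on imprecise trials reported above.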
