Workshop on Binary Analysis Research (BAR) 2025 Program
Friday, 28 February
-
Jack W. Davidson, Professor of Computer Science in the School of Engineering and Applied Science, University of Virginia
For the past twenty years, our research has been driven by the need to analyze, understand, and transform software without access to source code. Through a series of research programs, including DARPA's Self-Regenerative Systems (SRS) program, AFOSR's Enterprise Health: Self-Regenerative Incorruptible Enterprise program, IARPA's Securely Taking on New Executable Software of Uncertain Provenance (STONESOUP) program, DARPA's Cyber Grand Challenge (CGC), and DARPA's Cyber Fault-tolerant Attack Recovery (CFAR) program, we have developed novel techniques to analyze and transform binaries. This talk will retrospectively examine these efforts and our key contributions in binary analysis and rewriting, from early vulnerability discovery techniques to advanced automated program transformations. We will also discuss current binary analysis research areas, speculate on where binary analysis research is heading, and explain why it continues to be an important, well-funded, and impactful research area.
Speaker's Biography: Jack W. Davidson is a Professor of Computer Science in the School of Engineering and Applied Science at the University of Virginia. Professor Davidson is a Fellow of the ACM and a Life Fellow of the IEEE. He served as an Associate Editor of ACM's Transactions on Programming Languages and Systems for six years, and as an Associate Editor of ACM's Transactions on Architecture and Code Optimization for eight years. He served as Chair of ACM's Special Interest Group on Programming Languages (SIGPLAN) from 2005 to 2007. He currently serves on the ACM Executive Council and is chair of ACM's Digital Library Board, which oversees the operation and development of ACM's Digital Library.
-
-
Jack Royer (CentraleSupélec), Frédéric Tronel (CentraleSupélec, Inria, CNRS, University of Rennes), Yaëlle Vinçont (University of Rennes, Inria, CNRS, IRISA)
-
Ahmed Mostafa, Raisul Arefin Nahid, Samuel Mulder (Auburn University)
-
Dairo de Ruck, Jef Jacobs, Jorn Lapon, Vincent Naessens (DistriNet, KU Leuven, Belgium)
-
Caleb Stewart, Rhonda Gaede, Jeffrey Kulick (University of Alabama in Huntsville)
-
Heng Yin, Professor, Department of Computer Science and Engineering, University of California, Riverside
Deep learning, particularly Transformer-based models, has recently gained traction in binary analysis, showing promising outcomes. Despite numerous studies customizing these models for specific applications, the impact of such modifications on performance remains largely unexamined. Our study critically evaluates four custom Transformer models (jTrans, PalmTree, StateFormer, Trex) across various applications, revealing that except for the Masked Language Model (MLM) task, additional pre-training tasks do not significantly enhance learning. Surprisingly, the original BERT model often outperforms these adaptations, indicating that complex modifications and new pre-training tasks may be superfluous. Our findings advocate for focusing on fine-tuning rather than architectural or task-related alterations to improve model performance in binary analysis.
Speaker's Biography: Dr. Heng Yin is a Professor in the Department of Computer Science and Engineering at the University of California, Riverside. He obtained his PhD degree from the College of William and Mary in 2009. His research interests lie in computer security, with an emphasis on binary code analysis. His publications appear in top-tier technical conferences and journals, such as IEEE S&P, ACM CCS, USENIX Security, NDSS, ISSTA, ICSE, TSE, and TDSC. His research is sponsored by the National Science Foundation (NSF), the Defense Advanced Research Projects Agency (DARPA), the Air Force Office of Scientific Research (AFOSR), and the Office of Naval Research (ONR). In 2011, he received the prestigious NSF CAREER Award. He has also received the Google Security and Privacy Research Award, an Amazon Research Award, a DSN Distinguished Paper Award, and a RAID Best Paper Award.
-
-
Rachael Little, Dongpeng Xu (University of New Hampshire)
-
Caleb Helbling, Graham Leach-Krouse, Sam Lasser, Greg Sullivan (Draper)
-
Andrew Fasano, Zachary Estrada, Luke Craig, Ben Levy, Jordan McLeod, Jacques Becker, Elysia Witham, Cole DiLorenzo, Caden Kline, Ali Bobi (MIT Lincoln Laboratory), Dinko Dermendzhiev (Georgia Institute of Technology), Tim Leek (MIT Lincoln Laboratory), William Robertson (Northeastern University)
-
Sima Arasteh (University of Southern California), Pegah Jandaghi, Nicolaas Weideman (University of Southern California/Information Sciences Institute), Dennis Perepech, Mukund Raghothaman (University of Southern California), Christophe Hauser (Dartmouth College), Luis Garcia (University of Utah Kahlert School of Computing)