Heng Yin, Professor, Department of Computer Science and Engineering, University of California, Riverside

Deep learning, particularly Transformer-based models, has recently gained traction in binary analysis, showing promising outcomes. Despite numerous studies customizing these models for specific applications, the impact of such modifications on performance remains largely unexamined. Our study critically evaluates four customized Transformer models (jTrans, PalmTree, StateFormer, Trex) across various applications, revealing that, aside from the Masked Language Model (MLM) task, the additional pre-training tasks do not significantly enhance learning. Surprisingly, the original BERT model often outperforms these adaptations, indicating that complex architectural modifications and new pre-training tasks may be superfluous. Our findings advocate for focusing on fine-tuning rather than architectural or task-related alterations to improve model performance in binary analysis.
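To make the "plain fine-tuning of the original BERT" point concrete, the sketch below shows one ordinary fine-tuning step of an off-the-shelf BERT on a toy function-pair similarity task, with normalized assembly instructions treated as text. This is purely illustrative and not the speaker's code: the model checkpoint (bert-base-uncased), the instruction normalization, the pair-classification framing, and the labels are all assumptions made for the example.

```python
# Minimal sketch (assumptions, not the authors' method): fine-tune a vanilla
# BERT for a toy binary-code-similarity task, with no custom pre-training
# objective and no architectural changes.
import torch
from transformers import BertTokenizerFast, BertForSequenceClassification

# Pretend each function is a sequence of normalized x86 instructions.
func_a = "push rbp ; mov rbp , rsp ; mov eax , 0 ; pop rbp ; ret"
func_b = "push rbp ; mov rbp , rsp ; xor eax , eax ; pop rbp ; ret"

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # 2 classes: similar / dissimilar (toy labeling)
)

# Encode the function pair as a standard BERT sentence-pair input.
inputs = tokenizer(func_a, func_b, return_tensors="pt",
                   truncation=True, max_length=128)
labels = torch.tensor([1])  # 1 = "similar" in this illustrative labeling

# One gradient step of ordinary fine-tuning.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
outputs = model(**inputs, labels=labels)
outputs.loss.backward()
optimizer.step()
print(float(outputs.loss))
```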

Speaker's Biography: Dr. Heng Yin is a Professor in the Department of Computer Science and Engineering at the University of California, Riverside. He obtained his PhD degree from the College of William and Mary in 2009. His research interests lie in computer security, with an emphasis on binary code analysis. His publications appear in top-tier technical conferences and journals, such as IEEE S&P, ACM CCS, USENIX Security, NDSS, ISSTA, ICSE, TSE, and TDSC. His research is sponsored by the National Science Foundation (NSF), the Defense Advanced Research Projects Agency (DARPA), the Air Force Office of Scientific Research (AFOSR), and the Office of Naval Research (ONR). In 2011, he received the prestigious NSF CAREER Award. He has also received the Google Security and Privacy Research Award, an Amazon Research Award, a DSN Distinguished Paper Award, and a RAID Best Paper Award.
