Qi Wang (University of Illinois Urbana-Champaign), Wajih Ul Hassan (University of Illinois Urbana-Champaign), Ding Li (NEC Laboratories America, Inc.), Kangkook Jee (University of Texas at Dallas), Xiao Yu (NEC Laboratories America, Inc.), Kexuan Zou (University of Illinois Urbana-Champaign), Junghwan Rhee (NEC Laboratories America, Inc.), Zhengzhang Chen (NEC Laboratories America, Inc.), Wei Cheng (NEC Laboratories America,…

To subvert recent advances in perimeter and host security, the attacker community has developed and employed various attack vectors that make malware far stealthier than before, allowing it to penetrate the target system and prolong its presence there. Such advanced malware, or stealthy malware, impersonates or abuses benign applications and legitimate system tools to minimize its footprint in the target system. One example of such stealthy malware is fileless malware, which keeps its malicious logic mostly in the memory of well-trusted processes. Traditional detection tools, such as malware scanners, have difficulty detecting it, as the malware normally does not expose its malicious payload in a file and hides its malicious behaviors among the benign behaviors of the processes it abuses.

In this paper, we present PROVDETECTOR, a provenance-based approach for detecting stealthy malware. The intuition behind PROVDETECTOR is that although stealthy malware may impersonate or abuse a benign process, it still exposes its malicious behaviors in OS (operating system)-level provenance. Based on this intuition, PROVDETECTOR first employs a novel selection algorithm to identify the possibly malicious parts of a process's OS-level provenance data. It then applies a neural embedding and machine learning pipeline to automatically detect any behavior that deviates significantly from normal behaviors. We evaluate our approach on a large provenance dataset from an enterprise network and demonstrate that it achieves very high detection performance on stealthy malware (an average F1 score of 0.974). Further, we conduct thorough interpretability studies to understand the internals of the learned machine learning models.
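To make the described pipeline concrete, the sketch below illustrates the general shape of such an approach: per-process causal paths drawn from provenance data are embedded as vectors and scored against a benign profile with a novelty detector. Everything here is an assumption for illustration, not the paper's implementation: the example paths (`benign_paths`, `suspicious_path`) and the `path_to_sentence` helper are hypothetical, TF-IDF stands in for the paper's neural embedding, scikit-learn's `LocalOutlierFactor` stands in for its detection model, and the paper's path-selection algorithm is not reproduced.

```python
# Minimal sketch of a provenance-path anomaly detection pipeline (illustrative only).
# Assumptions: provenance has already been reduced to per-process causal paths,
# each a sequence of (entity, operation) tokens; TF-IDF replaces the neural
# embedding; LocalOutlierFactor replaces the paper's detection model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import LocalOutlierFactor

def path_to_sentence(path):
    """Flatten one causal path (a list of (node, operation) pairs) into a token string."""
    return " ".join(f"{node}:{op}" for node, op in path)

# Hypothetical benign paths observed for a document-editing process.
benign_paths = [
    [("winword.exe", "process"), ("report.docx", "read"), ("normal.dotm", "read")],
    [("winword.exe", "process"), ("fontcache.dat", "read"), ("temp.tmp", "write")],
    [("winword.exe", "process"), ("report.docx", "read"), ("autosave.asd", "write")],
    [("winword.exe", "process"), ("settings.xml", "read"), ("recent.lnk", "write")],
    [("winword.exe", "process"), ("normal.dotm", "read"), ("temp.tmp", "write")],
    [("winword.exe", "process"), ("report.docx", "read"), ("print.spl", "write")],
]

# Hypothetical suspicious path: the same trusted process spawning a shell
# that connects to an external host, as fileless malware might.
suspicious_path = [
    ("winword.exe", "process"),
    ("powershell.exe", "exec"),
    ("203.0.113.7:443", "connect"),
]

# Embed each path as a sparse vector (stand-in for the neural path embedding).
vectorizer = TfidfVectorizer(token_pattern=r"\S+")
X_train = vectorizer.fit_transform(path_to_sentence(p) for p in benign_paths)

# novelty=True lets the fitted model score previously unseen paths.
detector = LocalOutlierFactor(n_neighbors=3, novelty=True)
detector.fit(X_train.toarray())

X_test = vectorizer.transform([path_to_sentence(suspicious_path)]).toarray()
print(detector.predict(X_test))            # -1 means the path is flagged as anomalous
print(detector.decision_function(X_test))  # lower scores indicate stronger deviation
```

In practice, a deployment along these lines would fit the detector per program (e.g., one benign profile per executable), since what is normal for a word processor differs from what is normal for a shell.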
