Akul Goyal (University of Illinois at Urbana-Champaign), Xueyuan Han (Wake Forest University), Gang Wang (University of Illinois at Urbana-Champaign), Adam Bates (University of Illinois at Urbana-Champaign)
Reliable methods for host-layer intrusion detection remain an open problem within computer security. Recent research has recast intrusion detection as a provenance graph anomaly detection problem, thanks to concurrent advancements in machine learning and causal graph auditing. While these approaches show promise, their robustness against an adaptive adversary has yet to be proven. In particular, it is unclear whether mimicry attacks, which plagued past approaches to host intrusion detection, have a similar effect on modern graph-based methods.
In this work, we reveal that systematic design choices have left provenance graph host intrusion detection systems (Prov-HIDS) vulnerable to mimicry attacks. Against a corpus of exemplar Prov-HIDS, we develop evasion tactics that allow attackers to hide within benign process behaviors. Evaluating against public datasets, we demonstrate that an attacker can consistently evade detection (100% success rate) without modifying the underlying attack behaviors. We go on to show that our approach is feasible in live attack scenarios and outperforms domain-general adversarial sample techniques. By open sourcing our code and datasets, we provide a benchmark for the evaluation of future Prov-HIDS.