Dongqi Han (Tsinghua University), Zhiliang Wang (Tsinghua University), Wenqi Chen (Tsinghua University), Kai Wang (Tsinghua University), Rui Yu (Tsinghua University), Su Wang (Tsinghua University), Han Zhang (Tsinghua University), Zhihua Wang (State Grid Shanghai Municipal Electric Power Company), Minghui Jin (State Grid Shanghai Municipal Electric Power Company), Jiahai Yang (Tsinghua University), Xingang Shi (Tsinghua University), Xia…

Concept drift is one of the most frustrating challenges for learning-based security applications built on the closed-world assumption that training and deployment data follow an identical distribution. Anomaly detection, one of the most important tasks in the security domain, is by design immune to drift in abnormal behavior, since it is trained without any abnormal data (known as zero-positive learning); this immunity, however, comes at the cost of more severe impacts when normality shifts. Yet existing studies mainly focus on concept drift of abnormal behavior and/or supervised learning, leaving normality shift for zero-positive anomaly detection largely unexplored.

In this work, we are the first to explore normality shift for deep learning-based anomaly detection in security applications, and we propose OWAD, a general framework to detect, explain, and adapt to normality shift in practice. In particular, OWAD outperforms prior work by detecting shift in an unsupervised fashion, reducing the overhead of manual labeling, and providing better adaptation performance by handling shift at the distribution level. We demonstrate the effectiveness of OWAD through realistic experiments on three security-related anomaly detection applications with long-term practical data. Results show that OWAD provides better adaptation performance under normality shift with less labeling overhead. We present case studies that analyze normality shift and offer operational recommendations for security applications, and we conduct an initial real-world deployment on a SCADA security system.
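OWAD's specific detection and adaptation algorithms are described in the paper itself; purely as an illustration of the general idea of unsupervised, distribution-level shift detection, the Python sketch below (not OWAD's method; the function name, the anomaly-score inputs, and the choice of a two-sample Kolmogorov-Smirnov test are all assumptions for this example) compares a detector's score distribution on historical normal data against recent unlabeled deployment data and flags a statistically significant difference as a candidate normality shift.

# Hypothetical sketch of unsupervised, distribution-level shift detection.
# This is NOT OWAD's algorithm; it only illustrates comparing a model's
# output distribution on old vs. new (unlabeled) data.
import numpy as np
from scipy.stats import ks_2samp

def detect_normality_shift(scores_train, scores_deploy, alpha=0.05):
    """Flag a shift if the anomaly-score distributions differ significantly.

    scores_train : anomaly scores on historical (training-era) normal data
    scores_deploy: anomaly scores on recent, unlabeled deployment data
    alpha        : significance level for the two-sample KS test
    """
    stat, p_value = ks_2samp(scores_train, scores_deploy)
    return p_value < alpha, stat, p_value

# Toy usage: a location shift in the score distribution is detected.
rng = np.random.default_rng(0)
old = rng.normal(0.0, 1.0, size=5000)   # stand-in for training-era scores
new = rng.normal(0.4, 1.0, size=5000)   # stand-in for drifted scores
shifted, stat, p = detect_normality_shift(old, new)
print(f"shift detected: {shifted} (KS={stat:.3f}, p={p:.3g})")

A two-sample test like this requires no labels on the deployment data, which matches the abstract's point that shift detection can be done in an unsupervised fashion; how to then explain and adapt to a detected shift is the substance of OWAD itself.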
