Hamed Haddadpajouh (University of Guelph), Ali Dehghantanha (University of Guelph)

As Internet of Things devices become ever more widely deployed, the security challenges associated with autonomous, self-executing devices become increasingly critical. This research addresses the vulnerability of deep-learning-based malware threat-hunting models, particularly in Industrial Internet of Things environments. The study introduces an adversarial machine learning attack model tailored to generating adversarial payloads at the bytecode level of executable files.

Our investigation focuses on the MalConv malware threat-hunting model, employing the Fast Gradient Sign Method (FGSM) to craft adversarial instances. The proposed methodology is evaluated on a comprehensive dataset of cloud-edge Internet of Things malware. The empirical findings reveal a significant reduction in the threat-hunting model's accuracy, which drops from 99% to 82%. Moreover, the proposed approach demonstrates the effectiveness of adversarial attacks that leverage code repositories, showing their ability to evade AI-powered malware threat-hunting mechanisms.
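To illustrate the core idea behind FGSM, the sketch below applies it to a toy differentiable "detector" (a logistic-regression classifier over numeric features). This is a minimal, self-contained illustration of the attack principle only; it is not the paper's pipeline, which operates on raw bytecode consumed by MalConv, and all names here (`fgsm_perturb`, the toy weights) are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """One FGSM step against a logistic-regression detector.

    The gradient of the binary cross-entropy loss with respect to
    the input x is (sigmoid(w.x + b) - y) * w; FGSM moves x by eps
    in the direction of the sign of that gradient, maximizing loss.
    """
    p = sigmoid(np.dot(w, x) + b)
    grad = (p - y) * w
    return x + eps * np.sign(grad)

# Toy "malware detector" and a sample labeled malicious (y = 1).
rng = np.random.default_rng(0)
w = rng.normal(size=8)
b = 0.1
x = rng.normal(size=8)
y = 1.0

score_before = sigmoid(np.dot(w, x) + b)
x_adv = fgsm_perturb(x, w, b, y, eps=0.5)
score_after = sigmoid(np.dot(w, x_adv) + b)
# The perturbation increases the loss, i.e. pushes the malicious
# score toward the benign side, while each feature moves by at
# most eps.
```

In the bytecode setting the same sign-of-gradient principle applies, but the perturbation must additionally be mapped back to valid byte values that preserve the executable's functionality, which is the harder part the paper's payload-generation approach addresses.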

This work not only offers a practical basis for hardening deep-learning-based malware threat-hunting models in Internet of Things environments but also underscores the role of code repositories as a potential attack vector. The outcomes of this investigation emphasize the need to recognize code repositories as a distinct attack surface for malware threat-hunting models deployed in Internet of Things environments.
