Gelei Deng (Nanyang Technological University), Yi Liu (Nanyang Technological University), Yuekang Li (University of New South Wales), Kailong Wang (Huazhong University of Science and Technology), Ying Zhang (Virginia Tech), Zefeng Li (Nanyang Technological University), Haoyu Wang (Huazhong University of Science and Technology), Tianwei Zhang (Nanyang Technological University), Yang Liu (Nanyang Technological University)

Large language model (LLM) chatbots have made significant strides in various fields but remain vulnerable to jailbreak attacks, which aim to elicit inappropriate responses. Despite efforts to identify these weaknesses, current strategies are ineffective against mainstream LLM chatbots, mainly due to undisclosed defensive measures by service providers. Our paper introduces MASTERKEY, a framework exploring the dynamics of jailbreak attacks and countermeasures. We present a novel method that uses timing characteristics to dissect LLM chatbot defenses. This technique, inspired by time-based SQL injection, uncovers the inner workings of these defenses and enables a proof-of-concept attack on several LLM chatbots.
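
The snippet below is a minimal sketch of this timing probe, not MASTERKEY's actual implementation. It assumes a hypothetical query_chatbot client (simulated here so the script runs) and simply compares the latency of a refusal against the latency of a long benign generation to guess whether the defense inspects the input or the generated output.

```python
# Minimal timing-probe sketch. query_chatbot is a simulated placeholder;
# in a real study it would be replaced by a client for the target chatbot.
import random
import statistics
import time

def query_chatbot(prompt: str) -> str:
    """Placeholder: simulate a chatbot call with an artificial delay."""
    time.sleep(random.uniform(0.1, 0.2))  # stand-in for network + generation time
    return "simulated response"

def median_latency(prompt: str, trials: int = 5) -> float:
    """Median wall-clock latency of the chatbot for a given prompt."""
    samples = []
    for _ in range(trials):
        start = time.monotonic()
        query_chatbot(prompt)
        samples.append(time.monotonic() - start)
    return statistics.median(samples)

# One prompt forces a long benign answer; the second appends a disallowed
# request (left as a placeholder) to its tail. If the refusal of the second
# prompt takes about as long as generating the full benign answer, the
# defense likely screens the generated output; a near-instant refusal
# instead suggests input-side filtering.
BENIGN_LONG_PROMPT = "Summarize each chapter of a 300-page novel in detail."
DISALLOWED_SUFFIX = "<policy-violating request goes here>"  # intentionally elided

t_benign = median_latency(BENIGN_LONG_PROMPT)
t_probe = median_latency(BENIGN_LONG_PROMPT + " " + DISALLOWED_SUFFIX)

print(f"benign: {t_benign:.2f}s  probe: {t_probe:.2f}s")
if t_probe >= 0.8 * t_benign:
    print("Latency close to full generation time: output-side checking is plausible.")
else:
    print("Refusal returns much faster: input-side filtering is plausible.")
```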

Additionally, MASTERKEY features an innovative approach for automatically generating jailbreak prompts that target well-defended LLM chatbots. By fine-tuning an LLM with jailbreak prompts, we create attacks with a 21.58% success rate, significantly higher than the 7.33% achieved by existing methods. We have informed service providers of these findings, highlighting the urgent need for stronger defenses. This work not only reveals vulnerabilities in LLMs but also underscores the importance of robust defenses against such attacks.
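
The harness below is a minimal sketch of how such a success rate could be measured, assuming a generator LLM has already been fine-tuned on known jailbreak prompts (the fine-tuning step itself is omitted). generate_candidate, query_chatbot, and violates_policy are hypothetical, simulated placeholders, not the paper's actual API.

```python
# Sketch of a jailbreak success-rate evaluation loop. All three helpers are
# simulated placeholders so the script runs end to end.
import random

def generate_candidate() -> str:
    """Placeholder: sample a candidate prompt from the fine-tuned generator."""
    return "<candidate jailbreak prompt produced by the fine-tuned LLM>"

def query_chatbot(prompt: str) -> str:
    """Placeholder: send the candidate prompt to the target LLM chatbot."""
    return "simulated chatbot response"

def violates_policy(response: str) -> bool:
    """Placeholder oracle: did the response contain disallowed content,
    i.e. did the jailbreak attempt succeed? Here it flips a biased coin
    purely so the script is executable."""
    return random.random() < 0.2

def jailbreak_success_rate(num_candidates: int = 100) -> float:
    """Fraction of generated candidates that elicit a policy-violating reply;
    this is the metric behind the 21.58% vs. 7.33% comparison reported above."""
    successes = sum(
        violates_policy(query_chatbot(generate_candidate()))
        for _ in range(num_candidates)
    )
    return successes / num_candidates

if __name__ == "__main__":
    print(f"Estimated success rate: {jailbreak_success_rate():.2%}")
```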
