
The Attacker Moves Second: Stronger Adaptive Attacks Bypass Defenses Against LLM Jailbreaks and Prompt Injections

Milad Nasr, Nicholas Carlini, Chawin Sitawarin, Sander V. Schulhoff, Jamie Hayes, Michael Ilie, Juliette Pluto, Shuang Song, Harsh Chaudhari, Ilia Shumailov, Abhradeep Thakurta, Kai Yuanqing Xiao, Andreas Terzis, Florian Tramèr

2025-10-14


Summary

This paper investigates how well current methods for protecting large language models from malicious attacks actually work, arguing that the way we test these defenses is too weak and gives a false sense of security.

What's the problem?

Currently, defenses against attacks that try to trick language models into giving harmful information or performing unwanted actions are tested using either static lists of attack phrases or basic attack strategies. The issue is that these tests don't represent a smart attacker who would actively search for ways *around* the defense. It's like testing a lock by just wiggling the key a little: a determined thief would try far more sophisticated methods.

What's the solution?

The researchers developed stronger, adaptive attack methods, using techniques like gradient descent, reinforcement learning, random search, and human-guided exploration to refine the attacks. They then ran these improved attacks against twelve recent defense systems and bypassed almost all of them, achieving success rates above 90% in most cases, even though the original papers reported near-zero attack success rates for those same defenses.
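To give a feel for the random-search component, here is a minimal toy sketch. Everything in it is invented for illustration (the scoring function, vocabulary, and prompt are hypothetical stand-ins, not the paper's actual setup): the attacker repeatedly mutates a suffix appended to the prompt and keeps any change that improves a score measuring attack success.

```python
import random

def toy_defense_score(prompt: str) -> float:
    # Hypothetical stand-in for an attack-success judge. A real adaptive
    # attack would query the defended model and score its response.
    score = 0.0
    if "payload" in prompt:      # the attacker's goal string got through
        score += 1.0
    if "blocked" not in prompt:  # a keyword the toy "defense" filters on
        score += 1.0
    return score

def random_search_attack(base: str, vocab: list[str],
                         steps: int = 200, seed: int = 0) -> str:
    """Greedy random search: mutate one suffix token at a time,
    keeping only mutations that do not decrease the score."""
    rng = random.Random(seed)
    suffix = [rng.choice(vocab) for _ in range(8)]
    best = toy_defense_score(base + " " + " ".join(suffix))
    for _ in range(steps):
        i = rng.randrange(len(suffix))
        old = suffix[i]
        suffix[i] = rng.choice(vocab)   # propose a random mutation
        cand = toy_defense_score(base + " " + " ".join(suffix))
        if cand >= best:
            best = cand                 # keep the improvement
        else:
            suffix[i] = old             # revert the mutation
    return base + " " + " ".join(suffix)
```

The same loop structure underlies many black-box attacks; the paper's point is that plugging in a score tailored to the specific defense, and spending real compute on the search, is what static test sets fail to simulate.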

Why it matters?

This research is important because it shows that many language model defenses are weaker than they appear. It highlights the need for more realistic, adaptive evaluation methods, so that claimed robustness reflects what a determined attacker can actually achieve rather than a false sense of security.

Abstract

How should we evaluate the robustness of language model defenses? Current defenses against jailbreaks and prompt injections (which aim to prevent an attacker from eliciting harmful knowledge or remotely triggering malicious actions, respectively) are typically evaluated either against a static set of harmful attack strings, or against computationally weak optimization methods that were not designed with the defense in mind. We argue that this evaluation process is flawed. Instead, we should evaluate defenses against adaptive attackers who explicitly modify their attack strategy to counter a defense's design while spending considerable resources to optimize their objective. By systematically tuning and scaling general optimization techniques (gradient descent, reinforcement learning, random search, and human-guided exploration), we bypass 12 recent defenses (based on a diverse set of techniques) with attack success rate above 90% for most; importantly, the majority of defenses originally reported near-zero attack success rates. We believe that future defense work must consider stronger attacks, such as the ones we describe, in order to make reliable and convincing claims of robustness.