Bag of Tricks for Subverting Reasoning-based Safety Guardrails
Shuo Chen, Zhen Han, Haokun Chen, Bailan He, Shengyun Si, Jingpei Wu, Philip Torr, Volker Tresp, Jindong Gu
2025-10-15
Summary
This paper investigates the safety of new methods designed to prevent large reasoning models (LRMs) from being tricked into generating harmful content, focusing on 'reasoning-based guardrails'. These guardrails use the model's own reasoning ability to identify and block dangerous requests.
What's the problem?
While reasoning-based guardrails appeared highly effective at stopping 'jailbreak' attacks (attempts to bypass safety measures), the researchers discovered they are surprisingly easy to fool. By adding just a few carefully chosen words or phrases to a prompt, attackers can manipulate the guardrails into letting the model produce harmful responses, sometimes even more harmful than if the guardrails weren't there at all. These vulnerabilities appear across different models and work both on locally run models and on those accessed through online API services.
What's the solution?
The researchers developed a collection of simple jailbreak techniques, called a 'bag of tricks', that exploit this weakness. These techniques range from simply adding template phrases to more complex, fully automated methods that search for the best way to bypass the guardrails. They tested these methods on several popular open-source LRMs and consistently achieved high attack success rates, often exceeding 90% on standard benchmarks.
Why it matters?
This research matters because it shows that current safety measures for LRMs aren't as strong as they appear. The ease with which these guardrails can be bypassed highlights a critical need for stronger alignment techniques to prevent malicious use of these powerful models. If these vulnerabilities aren't addressed, open-source LRMs could be easily exploited to generate harmful content.
Abstract
Recent reasoning-based safety guardrails for Large Reasoning Models (LRMs), such as deliberative alignment, have shown strong defense against jailbreak attacks. By leveraging LRMs' reasoning ability, these guardrails help the models assess the safety of user inputs before generating final responses. This reasoning ability allows a model to analyze the intent of an input query and refuse to assist once it detects harmful intent hidden by jailbreak methods. Such guardrails have delivered significant gains in defense, such as near-perfect refusal rates on the open-source gpt-oss series. Unfortunately, we find that these powerful reasoning-based guardrails can be extremely vulnerable to subtle manipulation of the input prompts, and once hijacked, can lead to even more harmful results. Specifically, we first uncover a surprisingly fragile aspect of these guardrails: simply adding a few template tokens to the input prompt can bypass the seemingly powerful guardrails and elicit explicit, harmful responses. To explore further, we introduce a bag of jailbreak methods that subvert the reasoning-based guardrails. Our attacks span white-, gray-, and black-box settings and range from effortless template manipulations to fully automated optimization. Along with their potential for scalable implementation, these methods achieve alarmingly high attack success rates (e.g., exceeding 90% across 5 different benchmarks on the gpt-oss series, on both locally hosted models and online API services). Evaluations across various leading open-source LRMs confirm that these vulnerabilities are systemic, underscoring the urgent need for stronger alignment techniques for open-source LRMs to prevent malicious misuse. Code is open-sourced at https://chenxshuo.github.io/bag-of-tricks.