Jailbreaking in the Haystack
Rishi Rajesh Shah, Chen Henry Wu, Shashwat Saxena, Ziqian Zhong, Alexander Robey, Aditi Raghunathan
2025-11-10
Summary
This paper investigates a security weakness in large language models (LLMs) that can now handle very long inputs, like entire books. It shows how these models can be tricked into generating harmful content, even when they're designed to be safe.
What's the problem?
As language models get better at processing huge amounts of text, a new attack surface emerges: 'jailbreaking' through long inputs. An attacker can bury a harmful request inside a large amount of harmless text, and the model will fulfill it despite its safety training. Previous jailbreak methods were often compute-intensive or easily detected. The core issue is that where a harmful request is *placed* within a long input significantly affects whether the model will comply with it.
What's the solution?
The researchers developed a technique called NINJA, short for 'Needle-in-haystack jailbreak attack'. NINJA appends large amounts of benign, model-generated text to a harmful instruction; the key is carefully positioning that instruction within the resulting long context. They tested NINJA on several popular models, including LLaMA, Qwen, Mistral, and Gemini, and found it highly effective at eliciting unsafe outputs. They also found that, under a fixed compute budget, lengthening the context can be more effective than simply retrying the attack multiple times.
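As a rough illustration, a NINJA-style prompt can be thought of as benign filler passages with the harmful goal spliced in at a chosen relative position. The sketch below is hypothetical: the function name, the filler text, and the placement logic are illustrative assumptions, not the paper's actual pipeline for generating benign content.

```python
def build_ninja_style_prompt(goal: str, haystack: list[str], position: float) -> str:
    """Splice `goal` into benign filler passages at a relative `position`
    (0.0 = very start of the context, 1.0 = very end)."""
    assert 0.0 <= position <= 1.0
    idx = round(position * len(haystack))
    parts = haystack[:idx] + [goal] + haystack[idx:]
    return "\n\n".join(parts)

# Illustrative filler; the paper uses benign, model-generated content.
filler = [f"Benign model-generated passage {i}." for i in range(100)]

# Position 1.0 places the goal at the end of a long benign context.
prompt = build_ninja_style_prompt("HARMFUL_GOAL_PLACEHOLDER", filler, position=1.0)
```

Sweeping `position` over a grid (e.g. 0.0, 0.25, 0.5, 0.75, 1.0) and measuring attack success at each point is one simple way to probe the positional sensitivity the paper highlights.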
Why does it matter?
This research is important because it reveals a fundamental flaw in how these powerful language models handle long inputs. Even if a model is generally safe, a cleverly crafted, lengthy prompt can bypass its safety measures. This means developers need to focus on making models more robust to these kinds of attacks, especially as they become more widely used in applications like AI assistants and automated systems.
Abstract
Recent advances in long-context language models (LMs) have enabled million-token inputs, expanding their capabilities across complex tasks like computer-use agents. Yet, the safety implications of these extended contexts remain unclear. To bridge this gap, we introduce NINJA (short for Needle-in-haystack jailbreak attack), a method that jailbreaks aligned LMs by appending benign, model-generated content to harmful user goals. Critical to our method is the observation that the position of harmful goals plays an important role in safety. Experiments on the standard safety benchmark HarmBench show that NINJA significantly increases attack success rates across state-of-the-art open and proprietary models, including LLaMA, Qwen, Mistral, and Gemini. Unlike prior jailbreaking methods, our approach is low-resource, transferable, and less detectable. Moreover, we show that NINJA is compute-optimal -- under a fixed compute budget, increasing context length can outperform increasing the number of trials in best-of-N jailbreaks. These findings reveal that even benign long contexts -- when crafted with careful goal positioning -- introduce fundamental vulnerabilities in modern LMs.
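The fixed-budget comparison in the abstract can be made concrete with a toy cost model: count the tokens the target model processes either across N independent short-prompt trials or in a single long-context attempt. The accounting below is an illustrative assumption (the paper measures actual attack success, not just token cost); it only shows that the two strategies can be matched on compute so the comparison is fair.

```python
def cost_best_of_n(n_trials: int, tokens_per_trial: int) -> int:
    """Total tokens processed across N independent jailbreak retries."""
    return n_trials * tokens_per_trial

def cost_long_context(context_tokens: int) -> int:
    """Tokens processed in a single long-context (NINJA-style) attempt."""
    return context_tokens

# Example budget: 16 retries of 8k-token prompts vs. one 128k-token prompt
# consume the same compute; the paper's finding is that the long-context
# option can achieve a higher attack success rate at equal cost.
BUDGET = 128_000
assert cost_best_of_n(16, 8_000) == cost_long_context(BUDGET)
```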