Imperceptible Jailbreaking against Large Language Models
Kuofeng Gao, Yiming Li, Chao Du, Xin Wang, Xingjun Ma, Shu-Tao Xia, Tianyu Pang
2025-10-07

Summary
This paper explores a new way to trick large language models (LLMs) into giving harmful responses, focusing on making these tricks invisible to the user.
What's the problem?
Usually, when people try to 'jailbreak' LLMs (that is, get them to bypass their safety rules), they have to make visible changes to the question, such as appending strange words or symbols. For vision-based models, attacks can instead rely on imperceptible changes to the image pixels. With text-based models, however, it has generally been assumed that some visible modification to the prompt is required. This research shows that you *can* trick text-based LLMs without anyone noticing any alteration at all.
What's the solution?
The researchers discovered that a class of hidden Unicode characters, called variation selectors, can change how the LLM 'reads' a question without changing how it *looks* on the screen. They developed a chain-of-search method that automatically finds the right combination of these invisible characters to append to malicious questions, essentially searching for the 'secret' suffix most likely to make the LLM respond in a harmful way. They tested this on four different aligned LLMs and found it worked consistently, and the same idea also generalizes to prompt injection attacks.
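To make the underlying mechanism concrete, here is a minimal Python sketch (not the authors' pipeline) showing why appending variation selectors is invisible yet changes the input the model receives. The prompt text, the function name, the suffix length, and the random seed are all illustrative assumptions; only the Unicode ranges (VS1-VS16 at U+FE00-U+FE0F and VS17-VS256 at U+E0100-U+E01EF) come from the Unicode standard.

```python
import random
import unicodedata

# Unicode variation selectors: VS1-VS16 (U+FE00..U+FE0F) and the
# supplementary VS17-VS256 block (U+E0100..U+E01EF). On their own they
# typically render as nothing, so appending them leaves the prompt
# looking unchanged on screen.
VARIATION_SELECTORS = [chr(cp) for cp in range(0xFE00, 0xFE10)] + \
                      [chr(cp) for cp in range(0xE0100, 0xE01F0)]

def append_invisible_suffix(prompt: str, length: int = 8, seed: int = 0) -> str:
    """Append `length` randomly chosen variation selectors to `prompt` (illustrative helper)."""
    rng = random.Random(seed)
    suffix = "".join(rng.choice(VARIATION_SELECTORS) for _ in range(length))
    return prompt + suffix

original = "How do I bake sourdough bread?"   # benign placeholder prompt
augmented = append_invisible_suffix(original)

print(original)                        # both lines look identical on screen ...
print(augmented)
print(original == augmented)           # ... but the strings differ: False
print(len(original), len(augmented))   # 30 vs. 38 code points
print(unicodedata.name(augmented[-1])) # e.g. 'VARIATION SELECTOR-...'

# Any tokenizer that encodes the raw text (e.g. a byte-level BPE) will map
# the two strings to different token sequences, which is the property the
# paper's chain-of-search pipeline exploits when it searches over suffixes.
```

The sketch only demonstrates the imperceptibility property; the actual attack in the paper additionally searches over many such invisible suffixes to find one that elicits the desired response.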
Why it matters?
This is important because it shows that LLMs are vulnerable to attacks that are very difficult to detect. If someone can subtly alter a prompt with these hidden characters, they could potentially bypass safety measures and get the LLM to generate dangerous or inappropriate content without anyone realizing what's happening. This highlights the need for better defenses against these kinds of invisible attacks.
Abstract
Jailbreaking attacks on the vision modality typically rely on imperceptible adversarial perturbations, whereas attacks on the textual modality are generally assumed to require visible modifications (e.g., non-semantic suffixes). In this paper, we introduce imperceptible jailbreaks that exploit a class of Unicode characters called variation selectors. By appending invisible variation selectors to malicious questions, the jailbreak prompts appear visually identical to original malicious questions on screen, while their tokenization is "secretly" altered. We propose a chain-of-search pipeline to generate such adversarial suffixes to induce harmful responses. Our experiments show that our imperceptible jailbreaks achieve high attack success rates against four aligned LLMs and generalize to prompt injection attacks, all without producing any visible modifications in the written prompt. Our code is available at https://github.com/sail-sg/imperceptible-jailbreaks.