Verbalized Sampling: How to Mitigate Mode Collapse and Unlock LLM Diversity
Jiayi Zhang, Simon Yu, Derek Chong, Anthony Sicilia, Michael R. Tomz, Christopher D. Manning, Weiyan Shi
2025-10-15
Summary
This paper investigates why large language models (LLMs) sometimes start producing very similar, uncreative responses after being fine-tuned to follow human preferences, a problem called 'mode collapse'. It argues this isn't just a flaw in the training process, but stems from how humans naturally rate different options.
What's the problem?
When LLMs are trained to generate text that humans prefer, they tend to lose their ability to produce diverse and creative outputs. The researchers found that people consistently favor responses that are more typical or familiar, even when other responses are equally good or more interesting. This preference for the familiar pushes the model to concentrate on those typical responses, producing the lack of variety known as 'mode collapse'.
What's the solution?
To fix this, the researchers developed a technique called 'Verbalized Sampling' (VS). Instead of directly asking the model to generate a response, VS prompts the model to first estimate the probability of *many* different possible responses, and then output those responses along with their estimated probabilities. This forces the model to consider a wider range of options and prevents it from getting stuck on just the most typical ones. It's a simple trick that doesn't require any further training of the model.
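The idea can be sketched as a small prompting helper: build a prompt that asks for several candidate responses with verbalized probabilities, then sample one answer from the returned distribution. The prompt wording, the JSON response format, and the sampling step below are illustrative assumptions for this sketch, not the paper's exact implementation.

```python
import json
import random

def verbalized_sampling_prompt(task: str, k: int = 5) -> str:
    """Build a VS-style prompt asking for k responses plus probabilities.
    The exact wording and JSON schema are assumptions for illustration."""
    return (
        f"Generate {k} responses to the task below, each with your "
        "estimated probability of producing it. Reply as JSON: "
        '[{"response": "...", "probability": 0.0}, ...]\n'
        f"Task: {task}"
    )

def sample_from_verbalized(model_output: str, rng=random) -> str:
    """Parse the model's verbalized distribution (assumed JSON) and
    draw one response weighted by its stated probability."""
    items = json.loads(model_output)
    responses = [item["response"] for item in items]
    weights = [item["probability"] for item in items]
    return rng.choices(responses, weights=weights, k=1)[0]
```

In use, `verbalized_sampling_prompt("jokes about coffee")` would be sent to the model, and `sample_from_verbalized` applied to its reply; sampling from the verbalized distribution (rather than always taking the top item) is what surfaces the less typical responses.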
Why it matters?
This work is important because it shifts the focus from fixing the model itself to understanding the data used to train it. By recognizing that human preference data is inherently biased towards typicality, the researchers provide a new way to improve LLM diversity without needing complex changes to the training process. This means we can unlock more creativity and variety from existing LLMs, making them more useful for tasks like writing, storytelling, and generating new ideas.
Abstract
Post-training alignment often reduces LLM diversity, leading to a phenomenon known as mode collapse. Unlike prior work that attributes this effect to algorithmic limitations, we identify a fundamental, pervasive data-level driver: typicality bias in preference data, whereby annotators systematically favor familiar text, consistent with well-established findings in cognitive psychology. We formalize this bias theoretically, verify it empirically on preference datasets, and show that it plays a central role in mode collapse. Motivated by this analysis, we introduce Verbalized Sampling (VS), a simple, training-free prompting strategy to circumvent mode collapse. VS prompts the model to verbalize a probability distribution over a set of responses (e.g., "Generate 5 jokes about coffee and their corresponding probabilities"). Comprehensive experiments show that VS significantly improves performance across creative writing (poems, stories, jokes), dialogue simulation, open-ended QA, and synthetic data generation, without sacrificing factual accuracy or safety. For instance, in creative writing, VS increases diversity by 1.6-2.1x over direct prompting. We further observe an emergent trend: more capable models benefit more from VS. In sum, our work provides a new data-centric perspective on mode collapse and a practical inference-time remedy that helps unlock pre-trained generative diversity.