Top-$nσ$: Not All Logits Are You Need
Chenxia Tang, Jianchun Liu, Hongli Xu, Liusheng Huang
2024-11-19

Summary
This paper introduces top-$nσ$, a new sampling method for large language models that improves the quality of generated text by filtering noise tokens out of the model's predictions.
What's the problem?
Large language models (LLMs) typically rely on greedy decoding or low-temperature sampling for reasoning tasks, reflecting an assumed trade-off between diversity and accuracy. Existing sampling methods struggle to balance the two: raising the temperature to consider more candidate words lets noise tokens into the sampling pool, so the outputs become less coherent and less relevant precisely when more diversity is wanted.
What's the solution?
The authors propose top-$nσ$, which operates on the logits (the raw scores for each possible next word) before they are turned into probabilities. Their key observation is that logits naturally split into two groups: a large, Gaussian-distributed noisy region and a small informative region near the maximum. By applying a statistical threshold to the raw logits, top-$nσ$ filters out the noisy tokens, and because the threshold is computed before any temperature scaling, the set of candidate tokens stays stable regardless of how much variation is allowed (temperature); a sketch of the filtering step is shown below. The results show that top-$nσ$ not only performs better than existing sampling methods but also outperforms traditional greedy decoding.
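The following is a minimal PyTorch sketch of this kind of logit-level filtering, assuming the threshold is read as "keep tokens within n standard deviations of the maximum logit"; the function name, the default value of n, and the usage lines are illustrative and not the authors' reference implementation.

```python
import torch

def top_nsigma_filter(logits: torch.Tensor, n: float = 1.0) -> torch.Tensor:
    """Keep only tokens whose logit lies within n standard deviations of the
    maximum logit; mask everything else to -inf so it receives zero probability.

    `logits` has shape (..., vocab_size); `n` is the threshold hyperparameter
    (the default of 1.0 here is illustrative, not the paper's recommended value).
    """
    max_logit = logits.max(dim=-1, keepdim=True).values
    sigma = logits.std(dim=-1, keepdim=True)
    threshold = max_logit - n * sigma
    return logits.masked_fill(logits < threshold, float("-inf"))

# Hypothetical usage: filter the raw logits, then apply temperature and sample.
# Because the mask is decided before dividing by the temperature, the candidate
# set does not change as the temperature changes.
# logits = model(input_ids).logits[:, -1, :]
# probs = torch.softmax(top_nsigma_filter(logits, n=1.0) / temperature, dim=-1)
# next_token = torch.multinomial(probs, num_samples=1)
```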
Why it matters?
This research is important because it provides a new way to improve how language models generate text. By effectively filtering out irrelevant information, top-$nσ$ can lead to more accurate and coherent outputs, enhancing the overall performance of AI systems that rely on natural language processing.
Abstract
Large language models (LLMs) typically employ greedy decoding or low-temperature sampling for reasoning tasks, reflecting a perceived trade-off between diversity and accuracy. We challenge this convention by introducing top-$nσ$, a novel sampling method that operates directly on pre-softmax logits by leveraging a statistical threshold. Our key insight is that logits naturally separate into a Gaussian-distributed noisy region and a distinct informative region, enabling efficient token filtering without complex probability manipulations. Unlike existing methods (e.g., top-p, min-p) that inadvertently include more noise tokens at higher temperatures, top-$nσ$ maintains a stable sampling space regardless of temperature scaling. We also provide a theoretical analysis of top-$nσ$ to better understand its behavior. Extensive experimental results across four reasoning-focused datasets demonstrate that our method not only outperforms existing sampling approaches but also surpasses greedy decoding, while maintaining consistent performance even at high temperatures.
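As a brief illustration of the temperature-stability claim, here is one way to see it, assuming the threshold is read as "keep token $i$ if its logit is within $nσ$ of the maximum" (this specific form is an assumption for illustration, not quoted from the paper). Dividing all logits $\ell$ by a temperature $T > 0$ rescales the maximum and the standard deviation by the same factor, so the kept set is unchanged:
\[
  \frac{\ell_i}{T} \;\ge\; \frac{\max_j \ell_j}{T} - n\,\frac{\sigma(\ell)}{T}
  \quad\Longleftrightarrow\quad
  \ell_i \;\ge\; \max_j \ell_j - n\,\sigma(\ell),
\]
since $\max_j(\ell_j/T) = (\max_j \ell_j)/T$ and $\sigma(\ell/T) = \sigma(\ell)/T$. Probability-based truncation rules such as top-p, by contrast, act after the softmax, whose shape flattens as $T$ grows, so their candidate set widens with temperature.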