Confidence Regulation Neurons in Language Models
Alessandro Stolfo, Ben Wu, Wes Gurnee, Yonatan Belinkov, Xingyi Song, Mrinmaya Sachan, Neel Nanda
2024-06-25

Summary
This paper explores how large language models (LLMs) regulate uncertainty in their next-token predictions through two kinds of specialized neurons: entropy neurons and token frequency neurons. These components help the model calibrate how confident it should be in its outputs.
What's the problem?
Although LLMs are widely used, we don't fully understand how they represent and control uncertainty when predicting the next word in a sentence. This gap makes it harder to judge how reliable a model's outputs are, and therefore harder to trust what it generates.
What's the solution?
The researchers identified two key types of neurons that play a role in regulating confidence: entropy neurons and token frequency neurons. Entropy neurons adjust the overall confidence of the model's predictions by writing to a direction that the unembedding largely ignores; this changes the final normalization scale and effectively shrinks the logits without favoring any particular token. Token frequency neurons, by contrast, shift each token's score in proportion to how often that token appears in the training data, pulling the output distribution toward or away from the overall word-frequency (unigram) distribution. The researchers found these neurons across a range of models, including ones with billions of parameters, where they actively manage how confident the model is when generating text. A rough sketch of how one might look for entropy neurons under this description is shown below.
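The following is a minimal, illustrative sketch of the entropy-neuron idea described above: score MLP neurons by how large their output weights are and how much of that weight lies in the directions the unembedding matrix barely reads from. It uses random placeholder weights and an assumed null-space dimension k; it is not the authors' code.

```python
import torch

torch.manual_seed(0)
d_model, d_mlp, vocab = 512, 2048, 50257

# Placeholder weights standing in for a real model's final MLP layer and unembedding.
W_out = torch.randn(d_mlp, d_model)   # one output weight vector per MLP neuron
W_U = torch.randn(d_model, vocab)     # unembedding: residual stream -> logits

# Left singular vectors of W_U with the smallest singular values span residual-stream
# directions that barely affect the logits (an "effective null space" of the unembedding).
U, S, _ = torch.linalg.svd(W_U, full_matrices=False)
k = 32                                # assumed size of the effective null space
null_basis = U[:, -k:]                # (d_model, k), smallest singular values last

# Score each neuron: weight norm, and the fraction of that norm inside the null space.
norms = W_out.norm(dim=1)
null_frac = (W_out @ null_basis).norm(dim=1) / norms

# Candidate entropy neurons: unusually large norm, writing mostly into the null space,
# so they can change the residual-stream norm (and the final LayerNorm scale)
# with little direct effect on the logits.
score = norms * null_frac
top = torch.topk(score, 5).indices
print("candidate entropy neurons:", top.tolist())
```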
Why it matters?
This research is important because it sheds light on the inner workings of LLMs, particularly how they handle uncertainty. By understanding these mechanisms better, developers can improve the reliability and accuracy of language models, which is crucial for applications like chatbots, translation services, and any AI that generates text.
Abstract
Despite their widespread use, the mechanisms by which large language models (LLMs) represent and regulate uncertainty in next-token predictions remain largely unexplored. This study investigates two critical components believed to influence this uncertainty: the recently discovered entropy neurons and a new set of components that we term token frequency neurons. Entropy neurons are characterized by an unusually high weight norm and influence the final layer normalization (LayerNorm) scale to effectively scale down the logits. Our work shows that entropy neurons operate by writing onto an unembedding null space, allowing them to impact the residual stream norm with minimal direct effect on the logits themselves. We observe the presence of entropy neurons across a range of models, up to 7 billion parameters. On the other hand, token frequency neurons, which we discover and describe here for the first time, boost or suppress each token's logit proportionally to its log frequency, thereby shifting the output distribution towards or away from the unigram distribution. Finally, we present a detailed case study where entropy neurons actively manage confidence in the setting of induction, i.e. detecting and continuing repeated subsequences.
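To make the token frequency mechanism concrete, here is an illustrative sketch (synthetic logits and counts, not the paper's implementation) of how adding a multiple of each token's log frequency to the logits moves the output distribution toward or away from the unigram distribution, with the multiplier alpha playing the role of a token frequency neuron's scaled activation.

```python
import torch

torch.manual_seed(0)
vocab = 50257

# Placeholder unigram counts; in practice these would be measured from the training data.
counts = torch.randint(1, 10_000, (vocab,)).float()
log_freq = torch.log(counts / counts.sum())
unigram = torch.softmax(log_freq, dim=-1)    # the unigram (token frequency) distribution

logits = torch.randn(vocab)                  # the model's original next-token logits

def shift_toward_unigram(logits, log_freq, alpha):
    """Add alpha times the centered log frequency to every logit.

    alpha > 0 up-weights frequent tokens (toward the unigram distribution),
    alpha < 0 down-weights them (away from it).
    """
    return logits + alpha * (log_freq - log_freq.mean())

for alpha in (-0.5, 0.0, 0.5):
    p = torch.softmax(shift_toward_unigram(logits, log_freq, alpha), dim=-1)
    # Smaller KL(p || unigram) means the output distribution is closer to the unigram distribution.
    kl = torch.sum(p * (p.log() - unigram.log()))
    print(f"alpha={alpha:+.1f}  KL(p || unigram) = {kl:.3f}")
```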