LLMs Know More Than They Show: On the Intrinsic Representation of LLM Hallucinations
Hadas Orgad, Michael Toker, Zorik Gekhman, Roi Reichart, Idan Szpektor, Hadas Kotek, Yonatan Belinkov
2024-10-08

Summary
This paper examines how large language models (LLMs) produce incorrect answers, known as 'hallucinations,' and shows that these models internally encode more information about the truthfulness of their responses than previously recognized.
What's the problem?
LLMs often generate errors, such as stating incorrect facts or making reasoning mistakes. These errors can mislead users who rely on the models for accurate information, and it is hard to tell from the output alone when a model is wrong. Understanding how and where these errors arise is crucial for improving the reliability of LLMs.
What's the solution?
The authors show that LLMs' internal representations carry substantial information about the truthfulness of their outputs. In particular, this truthfulness signal is concentrated in specific tokens of the generated answer, and probing the model's hidden states at those tokens significantly improves error detection, although these detectors do not generalize well across datasets. The internal states can also be used to predict what kind of error the model is likely to make, which enables tailored mitigation strategies. Strikingly, a model can encode the correct answer internally yet consistently generate an incorrect one.
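To make the idea concrete, the sketch below shows one common way to probe hidden states for truthfulness: extract the model's representation at an answer token and train a simple linear classifier to predict whether the answer is correct. This is a minimal illustration of the general probing technique, not the paper's exact method; the model name, layer index, helper functions, and the labeled QA examples are all assumptions for the sake of the example.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.linear_model import LogisticRegression

# Illustrative model choice; any causal LM with accessible hidden states works.
model_name = "meta-llama/Llama-3.1-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_hidden_states=True)
model.eval()

def answer_token_state(prompt: str, answer: str, layer: int = 16) -> torch.Tensor:
    """Return the hidden state at the last token of the answer span (layer is an assumed hyperparameter)."""
    inputs = tokenizer(prompt + " " + answer, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # hidden_states is a tuple of (num_layers + 1) tensors, each [1, seq_len, hidden_dim]
    return outputs.hidden_states[layer][0, -1, :]

def train_probe(labeled_examples):
    """labeled_examples: list of (prompt, generated_answer, is_correct) triples
    from some QA dataset with correctness annotations (assumed to exist)."""
    feats = [answer_token_state(p, a).float().numpy() for p, a, _ in labeled_examples]
    labels = [int(c) for _, _, c in labeled_examples]
    # A linear probe: if it separates correct from incorrect answers,
    # the hidden state at that token carries truthfulness information.
    return LogisticRegression(max_iter=1000).fit(feats, labels)

A probe trained this way can then score new generations by extracting the same token representation and calling probe.predict_proba; as the paper notes, though, such detectors may not transfer across datasets.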
Why it matters?
This research is important because it provides insights into how LLMs process information and make decisions. By understanding the internal workings of these models, developers can create better error detection and correction methods. This could lead to more reliable AI systems that provide accurate information, making them more useful in real-world applications like education, customer service, and research.
Abstract
Large language models (LLMs) often produce errors, including factual inaccuracies, biases, and reasoning failures, collectively referred to as "hallucinations". Recent studies have demonstrated that LLMs' internal states encode information regarding the truthfulness of their outputs, and that this information can be utilized to detect errors. In this work, we show that the internal representations of LLMs encode much more information about truthfulness than previously recognized. We first discover that the truthfulness information is concentrated in specific tokens, and leveraging this property significantly enhances error detection performance. Yet, we show that such error detectors fail to generalize across datasets, implying that -- contrary to prior claims -- truthfulness encoding is not universal but rather multifaceted. Next, we show that internal representations can also be used for predicting the types of errors the model is likely to make, facilitating the development of tailored mitigation strategies. Lastly, we reveal a discrepancy between LLMs' internal encoding and external behavior: they may encode the correct answer, yet consistently generate an incorrect one. Taken together, these insights deepen our understanding of LLM errors from the model's internal perspective, which can guide future research on enhancing error analysis and mitigation.