Benchmarking Mental State Representations in Language Models
Matteo Bortoletto, Constantin Ruhdorfer, Lei Shi, Andreas Bulling
2024-06-28

Summary
This paper examines how well language models (LMs) understand and internally represent mental states, such as beliefs, particularly on tasks that require Theory of Mind reasoning. It introduces a benchmark for evaluating these abilities across different types of LMs.
What's the problem?
While many studies have evaluated how well LMs generate text on tasks that require reasoning about mental states, there has been little research on how these models represent those states internally. This gap makes it hard to know whether, and how, different model designs and training methods affect their ability to represent beliefs, both their own and those of others.
What's the solution?
To address this, the authors developed a comprehensive benchmark covering LMs of different types, sizes, and fine-tuning approaches. They measured how well these models represent mental states by training probes on the models' internal activations and testing how robust those representations are to variations in the prompt. The results show that larger models, and especially fine-tuned ones, represent the beliefs of others more accurately. They also found that even prompt variations that should be helpful can significantly degrade performance, and that steering a model's activations can improve its Theory of Mind reasoning without training any probe. A minimal probing sketch is shown below.
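To give a concrete picture of what "probing" means here, the following is a minimal sketch, not the authors' exact setup: it trains a simple logistic-regression probe on a language model's hidden activations to predict a belief label. The model name (gpt2), the layer index, and the toy false-belief examples are hypothetical placeholders chosen only for illustration.

```python
# Minimal probing sketch (illustrative only, not the paper's exact pipeline).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.linear_model import LogisticRegression

model_name = "gpt2"  # placeholder; the paper benchmarks several LM families and sizes
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_hidden_states=True)
model.eval()

def last_token_activation(prompt: str, layer: int = 6) -> torch.Tensor:
    """Return the hidden state of the final token at a chosen (hypothetical) layer."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    return out.hidden_states[layer][0, -1]  # shape: (hidden_dim,)

# Toy, hand-written false-belief examples (label 1 = the protagonist holds a false belief).
data = [
    ("Sally puts the ball in the basket and leaves. Anne moves it to the box. "
     "Sally thinks the ball is in the basket.", 1),
    ("Sally puts the ball in the basket. Anne moves it to the box while Sally watches. "
     "Sally thinks the ball is in the box.", 0),
    ("Tom leaves his keys on the desk and goes out. Mia hides them in a drawer. "
     "Tom thinks his keys are on the desk.", 1),
    ("Tom leaves his keys on the desk. Mia moves them to a drawer in front of him. "
     "Tom thinks his keys are in the drawer.", 0),
]
X = torch.stack([last_token_activation(text) for text, _ in data]).numpy()
y = [label for _, label in data]

# A linear probe on the activations; with so few examples this only shows the mechanics.
# A real benchmark would train and evaluate on large, held-out splits.
probe = LogisticRegression(max_iter=1000).fit(X, y)
print("Training accuracy:", probe.score(X, y))
```

In the paper's framing, probe accuracy on held-out data is used as a measure of how much belief-related information is linearly decodable from the model's internal representations.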
Why it matters?
This research is important because it helps clarify how language models think about and represent mental states, which is crucial for tasks that require understanding complex human interactions. By improving our knowledge in this area, we can develop better AI systems that can interact more naturally with people, leading to advancements in fields like education, therapy, and customer service.
Abstract
While numerous works have assessed the generative performance of language models (LMs) on tasks requiring Theory of Mind reasoning, research into the models' internal representation of mental states remains limited. Recent work has used probing to demonstrate that LMs can represent beliefs of themselves and others. However, these claims are accompanied by limited evaluation, making it difficult to assess how mental state representations are affected by model design and training choices. We report an extensive benchmark with various LM types with different model sizes, fine-tuning approaches, and prompt designs to study the robustness of mental state representations and memorisation issues within the probes. Our results show that the quality of models' internal representations of the beliefs of others increases with model size and, more crucially, with fine-tuning. We are the first to study how prompt variations impact probing performance on theory of mind tasks. We demonstrate that models' representations are sensitive to prompt variations, even when such variations should be beneficial. Finally, we complement previous activation editing experiments on Theory of Mind tasks and show that it is possible to improve models' reasoning performance by steering their activations without the need to train any probe.
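The abstract's final point, that reasoning can be improved by steering activations without training any probe, can be pictured with a generic mean-difference (contrastive activation addition) sketch like the one below. This is not necessarily the paper's exact editing method; the model, layer index, steering strength, and contrast prompts are illustrative assumptions.

```python
# Probe-free activation steering sketch (contrastive activation addition; illustrative only).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

LAYER, ALPHA = 6, 4.0  # hypothetical layer index and steering strength

def mean_activation(prompts, layer):
    """Average last-token hidden state at `layer` over a set of prompts."""
    acts = []
    for p in prompts:
        inputs = tokenizer(p, return_tensors="pt")
        with torch.no_grad():
            hs = model(**inputs, output_hidden_states=True).hidden_states
        acts.append(hs[layer][0, -1])
    return torch.stack(acts).mean(dim=0)

# Contrast two hypothetical prompt sets to get a "track the other agent's belief" direction.
positive = ["Sally does not see Anne move the ball, so Sally still believes it is in the basket."]
negative = ["Sally watches Anne move the ball, so Sally believes it is in the box."]
steer = mean_activation(positive, LAYER) - mean_activation(negative, LAYER)

def hook(module, inputs, output):
    # Add the steering vector to the block's hidden states during the forward pass.
    if isinstance(output, tuple):
        return (output[0] + ALPHA * steer,) + output[1:]
    return output + ALPHA * steer

handle = model.transformer.h[LAYER - 1].register_forward_hook(hook)
prompt = ("Sally puts the ball in the basket and leaves. Anne moves it to the box. "
          "Sally thinks the ball is in the")
out = model.generate(**tokenizer(prompt, return_tensors="pt"), max_new_tokens=5)
print(tokenizer.decode(out[0]))
handle.remove()
```

The design point the abstract highlights is that the steering direction comes directly from contrasting activations, so no probe classifier needs to be trained before editing the model's behaviour.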