Reflection-Bench: probing AI intelligence with reflection
Lingyu Li, Yixu Wang, Haiquan Zhao, Shuqi Kong, Yan Teng, Chunbo Li, Yingchun Wang
2024-10-28

Summary
This paper introduces Reflection-Bench, a new benchmark that evaluates how well large language models (LLMs) can reflect, that is, adapt their beliefs and behavior when outcomes do not match their expectations.
What's the problem?
Reflection is a core aspect of intelligence: it allows both humans and AI systems to learn from experience and adjust their behavior accordingly. However, current LLMs often struggle with this ability and can perform poorly when outcomes defy their expectations or tasks grow complex. There is therefore a need for a systematic way to assess, and ultimately improve, the reflective capabilities of these models.
What's the solution?
The authors created Reflection-Bench, a suite of seven tasks spanning the cognitive functions that underlie reflection: perception, memory, belief updating, decision-making, prediction, counterfactual thinking, and meta-reflection. They evaluated 13 prominent LLMs, including OpenAI o1, GPT-4, and Claude 3.5 Sonnet, on these tasks. The results showed that most LLMs still have significant limitations in reflecting on their outputs and adapting their reasoning. The authors also discussed likely causes of these shortcomings and suggested directions for future research; a minimal illustrative sketch of such an evaluation loop follows.
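The authors' actual task interfaces live in their repository; the hypothetical Python sketch below is only a rough illustration of how a benchmark of this shape might loop a model over interactive tasks and score it per cognitive function. The `ReflectionTask` and `Trial` classes, the task names, and the dummy model are assumptions for illustration, not the paper's API.

```python
# Hypothetical sketch of an evaluation loop for a Reflection-Bench-style benchmark.
# The interfaces here (ReflectionTask, Trial, dummy_model) are illustrative
# assumptions, not the authors' code.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Trial:
    prompt: str    # what the model sees on this step
    expected: str  # the outcome used for scoring


@dataclass
class ReflectionTask:
    name: str            # e.g. "belief_updating"
    trials: List[Trial]  # sequential steps; later trials can contradict earlier ones

    def score(self, responses: List[str]) -> float:
        """Fraction of trials where the model's answer matches the expected outcome."""
        correct = sum(r.strip() == t.expected for r, t in zip(responses, self.trials))
        return correct / len(self.trials)


def evaluate(model: Callable[[str], str], tasks: List[ReflectionTask]) -> Dict[str, float]:
    """Run each task in turn and return a per-function score."""
    results = {}
    for task in tasks:
        responses = [model(trial.prompt) for trial in task.trials]
        results[task.name] = task.score(responses)
    return results


if __name__ == "__main__":
    # Toy stand-in for an LLM call; replace with a real API client.
    def dummy_model(prompt: str) -> str:
        return "B"

    tasks = [
        ReflectionTask(
            name="belief_updating",
            trials=[
                Trial(prompt="Cue A predicted reward last round. Choose A or B.", expected="A"),
                Trial(prompt="A just failed to pay out. Choose A or B.", expected="B"),
            ],
        ),
    ]
    print(evaluate(dummy_model, tasks))
```

The key property such tasks share is that the expected answer changes partway through the sequence, so a model scores well only if it updates on the unexpected feedback rather than repeating its earlier response.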
Why it matters?
This research is important because it highlights the need for better reflective capabilities in AI systems. By developing tools like Reflection-Bench, researchers can better understand how LLMs think and learn, ultimately leading to more intelligent and reliable AI that can effectively interact with the world.
Abstract
The ability to adapt beliefs or behaviors in response to unexpected outcomes, reflection, is fundamental to intelligent systems' interaction with the world. From a cognitive science perspective, this serves as a core principle of intelligence applicable to both human and AI systems. To address the debate on the intelligence of large language models (LLMs), we propose Reflection-Bench, a comprehensive benchmark comprising 7 tasks spanning core cognitive functions crucial for reflection, including perception, memory, belief updating, decision-making, prediction, counterfactual thinking, and meta-reflection. We evaluate the performance of 13 prominent LLMs, including OpenAI o1, GPT-4, and Claude 3.5 Sonnet. The results indicate that current LLMs still lack satisfactory reflection ability. We discuss the underlying causes of these results and suggest potential avenues for future research. In conclusion, Reflection-Bench offers both evaluation tools and inspiration for developing AI capable of reliably interacting with the environment. Our data and code are available at https://github.com/YabYum/ReflectionBench.