The CoT Encyclopedia: Analyzing, Predicting, and Controlling how a Reasoning Model will Think
Seongyun Lee, Seungone Kim, Minju Seo, Yongrae Jo, Dongyoung Go, Hyeonbin Hwang, Jinho Park, Xiang Yue, Sean Welleck, Graham Neubig, Moontae Lee, Minjoon Seo
2025-05-16
Summary
This paper introduces the CoT Encyclopedia, a new framework that helps researchers understand and organize the different ways AI models reason through problems step by step, especially when they generate long chains of reasoning.
What's the problem?
Large language models often solve problems by generating a series of intermediate thoughts or steps, but it is hard to tell which strategies a model is actually using or to predict how it will reason in new situations. That opacity makes the models difficult to trust, compare, or improve.
What's the solution?
The researchers built a framework that automatically examines these chains of thought, derives categories of reasoning styles from the model's own outputs (bottom-up, rather than from a fixed human-defined taxonomy), and sorts each chain into those styles. This makes it easier to see how a model tends to reason, and to steer it toward strategies that work better.
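To give a flavor of what "sorting chains of thought into reasoning styles" can look like, here is a deliberately simplified, hypothetical sketch. The paper's actual framework derives its classification criteria automatically from model outputs; the rubric categories and keyword cues below are illustrative stand-ins, not the authors' method.

```python
from collections import Counter

# Hypothetical rubric: toy categories and keyword cues standing in for
# the criteria the real framework would derive bottom-up from model outputs.
RUBRIC = {
    "decomposition": ["break down", "subproblem", "smaller part"],
    "verification": ["check", "verify", "confirm"],
    "backtracking": ["wait", "actually", "reconsider"],
}

def classify_cot(trace: str) -> str:
    """Assign a chain-of-thought trace to the rubric category whose
    keyword cues appear most often (a toy stand-in for a real classifier)."""
    text = trace.lower()
    scores = {cat: sum(text.count(kw) for kw in kws)
              for cat, kws in RUBRIC.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "other"

def strategy_distribution(traces):
    """Aggregate per-trace labels into a distribution over reasoning styles,
    which is the kind of model-level profile such an analysis produces."""
    return Counter(classify_cot(t) for t in traces)

traces = [
    "First, break down the problem into a smaller subproblem...",
    "Let me check the arithmetic and verify the result...",
    "Wait, actually I should reconsider the earlier assumption...",
]
print(strategy_distribution(traces))
# → Counter({'backtracking': 1, 'decomposition': 1, 'verification': 1})
```

A real system would replace the keyword rubric with criteria extracted from the model's own reasoning traces, but the output shape is the same: a per-trace label plus an aggregate profile of how the model tends to think.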
Why does it matter?
This matters because it allows us to make AI models more understandable, reliable, and effective, which is important for using them in areas like education, science, and any situation where clear and logical thinking is needed.
Abstract
The CoT Encyclopedia is a bottom-up framework that automatically analyzes and categorizes the reasoning strategies in long chains of thought generated by large language models, providing more interpretable analyses and enabling performance improvements.