Chain of Mindset: Reasoning with Adaptive Cognitive Modes

Tianyi Jiang, Arctanx An, Hengyi Feng, Naixin Zhai, Haodong Li, Xiaomin Yu, Jiahui Liu, Hanwen Du, Shuo Zhang, Zhi Yang, Jie Huang, Yuhua Li, Yongxin Ni, Huacan Wang, Ronghao Chen

2026-02-11

Summary

This paper introduces a new way to improve how large language models (LLMs) solve complex problems by allowing them to switch between different thinking styles, or 'mindsets', during the problem-solving process.

What's the problem?

Current LLMs tend to approach every step of a problem with the same fixed way of thinking. This is a limitation because different parts of a problem actually require different approaches – sometimes you need to think creatively, sometimes logically, and sometimes spatially. By sticking to one mindset, these models miss out on opportunities to solve problems more effectively and reach a higher level of intelligence.

What's the solution?

The researchers developed a framework called 'Chain of Mindset' (CoM). This system breaks reasoning down into four distinct mindsets: thinking about space (Spatial), narrowing down options (Convergent), brainstorming many possibilities (Divergent), and following step-by-step procedures (Algorithmic). A 'Meta-Agent' acts like a director, choosing the best mindset for each step of the problem, and a 'Context Gate' manages the information flow between mindsets to keep things efficient. Importantly, this doesn't require retraining the LLM; it's a system built *around* existing models.
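The control loop described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the mindset handlers, `pick_mindset` (standing in for the Meta-Agent, which in the real system would query an LLM), and `context_gate` are all hypothetical names chosen for this example.

```python
# Minimal sketch of the Chain-of-Mindset loop. All names here are
# illustrative stand-ins, not the paper's actual API.

# The four mindsets, each modeled as a function that advances the
# reasoning state. A real system would call an LLM with a
# mindset-specific prompt instead.
MINDSETS = {
    "spatial": lambda s: s + " [spatial step]",
    "convergent": lambda s: s + " [convergent step]",
    "divergent": lambda s: s + " [divergent step]",
    "algorithmic": lambda s: s + " [algorithmic step]",
}

def pick_mindset(state: str) -> str:
    # Stand-in for the Meta-Agent: the paper selects a mindset from the
    # evolving reasoning state; here simple keyword rules suffice.
    if "enumerate" in state:
        return "divergent"
    if "geometry" in state:
        return "spatial"
    if "code" in state:
        return "algorithmic"
    return "convergent"

def context_gate(state: str, max_chars: int = 200) -> str:
    # Stand-in for the Context Gate: pass each mindset only a filtered
    # view of the context (here, just the most recent characters).
    return state[-max_chars:]

def chain_of_mindset(problem: str, steps: int = 3):
    """Run a short reasoning chain, picking a mindset per step."""
    state = problem
    trace = []
    for _ in range(steps):
        mindset = pick_mindset(state)      # Meta-Agent decides
        trace.append(mindset)
        state = MINDSETS[mindset](context_gate(state))  # gated handoff
    return state, trace
```

The key design point this sketch captures is that mindset selection happens at *every* step, on the current state, rather than being fixed once per problem.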

Why it matters?

This work is significant because it shows a way to make LLMs more flexible and capable problem-solvers. By allowing models to adapt their thinking style, CoM achieves better results on a variety of challenging tasks, including math, coding, science questions, and spatial reasoning, and does so without any additional training. This brings us closer to AI that can truly reason and solve problems the way humans do.

Abstract

Human problem-solving is never the repetition of a single mindset, by which we mean a distinct mode of cognitive processing. When tackling a specific task, we do not rely on a single mindset; instead, we integrate multiple mindsets within the single solution process. However, existing LLM reasoning methods fall into a common trap: they apply the same fixed mindset across all steps, overlooking that different stages of solving the same problem require fundamentally different mindsets. This single-minded assumption prevents models from reaching the next level of intelligence. To address this limitation, we propose Chain of Mindset (CoM), a training-free agentic framework that enables step-level adaptive mindset orchestration. CoM decomposes reasoning into four functionally heterogeneous mindsets: Spatial, Convergent, Divergent, and Algorithmic. A Meta-Agent dynamically selects the optimal mindset based on the evolving reasoning state, while a bidirectional Context Gate filters cross-module information flow to maintain effectiveness and efficiency. Experiments across six challenging benchmarks spanning mathematics, code generation, scientific QA, and spatial reasoning demonstrate that CoM achieves state-of-the-art performance, outperforming the strongest baseline by 4.96% and 4.72% in overall accuracy on Qwen3-VL-32B-Instruct and Gemini-2.0-Flash, while balancing reasoning efficiency. Our code is publicly available at https://github.com/QuantaAlpha/chain-of-mindset.