Learning to Reason via Mixture-of-Thought for Logical Reasoning
Tong Zheng, Lichang Chen, Simeng Han, R. Thomas McCoy, Heng Huang
2025-05-22

Summary
This paper introduces a new way to help AI models get better at logical reasoning: instead of sticking to one style of thinking, the model reasons in natural language, computer code, and symbolic logic, and then combines what it finds.
What's the problem?
AI models often struggle with logical reasoning because they usually rely on a single mode of reasoning, most commonly natural-language chain-of-thought. That limits their ability to solve harder problems that call for a mix of skills.
What's the solution?
The researchers developed the Mixture-of-Thought (MoT) framework, which trains the model to reason in three complementary modalities: natural language, code, and symbolic logic in the form of truth tables. At test time, the model answers the same problem in all three modalities and the final answer is chosen by a vote, which yields higher accuracy than any single modality on its own.
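To make the inference step concrete, here is a minimal sketch of modality-level voting in Python. The `solve` hook is a hypothetical stand-in for an LLM call that prompts the model to reason in one modality and return its final answer label; the function and modality names are illustrative assumptions, not the paper's exact API.

```python
from collections import Counter
from typing import Callable, Sequence

def mot_inference(question: str,
                  solve: Callable[[str, str], str],
                  modalities: Sequence[str] = ("natural_language", "code", "truth_table")) -> str:
    """Ask for an answer in each reasoning modality, then majority-vote."""
    answers = [solve(question, modality) for modality in modalities]
    winner, _count = Counter(answers).most_common(1)[0]  # ties resolve by first seen
    return winner

# Toy usage with a mock solver: two modalities agree, so "True" wins the vote.
mock = lambda q, m: {"natural_language": "True", "code": "True", "truth_table": "False"}[m]
print(mot_inference("Does the conclusion follow from the premises?", mock))  # -> True
```

In this setup, each modality acts as an independent reasoner, so the vote only helps when their errors are not perfectly correlated; that diversity across modalities is the intuition behind the mixture.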
Why it matters?
This matters because it helps create AI that can solve tougher, more varied problems, making them more useful for things like math, science, and programming.
Abstract
A Mixture-of-Thought framework enables LLMs to reason across natural language, code, and symbolic logic, improving accuracy on logical reasoning tasks compared to single-modality approaches.