HDFlow: Enhancing LLM Complex Problem-Solving with Hybrid Thinking and Dynamic Workflows
Wenlin Yao, Haitao Mi, Dong Yu
2024-09-30

Summary
This paper introduces HDFlow, a new framework designed to improve how Large Language Models (LLMs) solve complex problems by combining different thinking styles and flexible workflows.
What's the problem?
Even though LLMs have made great progress, they still struggle with complex reasoning tasks that require multiple steps and a combination of different skills. Traditional prompting methods apply a single fixed reasoning strategy, so the models can't adapt their approach to the difficulty of the problem at hand.
What's the solution?
HDFlow tackles this issue by integrating two main ideas: Dynamic Workflows, which break a complicated problem into smaller, manageable sub-tasks and organize how to solve them; and Hybrid Thinking, which combines fast (intuitive) and slow (deliberate) reasoning depending on the problem's complexity, letting the model switch strategies as needed. The authors also automatically synthesized a large dataset of 27K challenging reasoning problems and used it to fine-tune smaller LLMs so they internalize these fast/slow reasoning strategies.
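The fast/slow switching described above can be sketched as a simple control loop: attempt a fast, direct answer first, self-verify it, and only escalate to the slower workflow-based reasoning when verification fails. This is an illustrative sketch, not the paper's implementation; the function names (`fast_solve`, `verify`, `slow_solve`) are hypothetical stand-ins for LLM calls.

```python
def hybrid_solve(problem, fast_solve, verify, slow_solve):
    """Illustrative hybrid-thinking loop: try a fast, intuitive pass
    first; if self-verification rejects the answer, fall back to slow,
    deliberate (workflow-based) reasoning."""
    answer = fast_solve(problem)          # fast, direct attempt
    if verify(problem, answer):           # model self-checks its answer
        return answer, "fast"
    return slow_solve(problem), "slow"    # escalate to slow thinking


# Toy stand-ins: the fast mode only "handles" short problems.
def toy_fast(p):
    return p.upper()

def toy_verify(p, a):
    return len(p) < 10

def toy_slow(p):
    return " ".join(p.upper().split())

print(hybrid_solve("easy", toy_fast, toy_verify, toy_slow))
# → ('EASY', 'fast')
print(hybrid_solve("a much harder problem", toy_fast, toy_verify, toy_slow))
# → ('A MUCH HARDER PROBLEM', 'slow')
```

The key design point is that the cheap path runs first, so easy problems never pay the cost of the full workflow.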
Why it matters?
This research is significant because it enhances the ability of LLMs to handle complex reasoning tasks more effectively. By improving how these models think and solve problems, HDFlow could lead to better performance in various applications, such as natural language understanding, decision-making, and problem-solving in real-world scenarios.
Abstract
Despite recent advancements in large language models (LLMs), their performance on complex reasoning problems requiring multi-step thinking and combining various skills is still limited. To address this, we propose HDFlow, a novel framework for complex reasoning with LLMs that combines fast and slow thinking modes in an adaptive manner. Our approach consists of two key components: 1) a new approach for slow, deliberate reasoning called Dynamic Workflow, which automatically decomposes complex problems into more manageable sub-tasks and dynamically designs a workflow to assemble specialized LLM or symbolic reasoning tools to solve sub-tasks; 2) Hybrid Thinking, a general framework that dynamically combines fast and slow thinking based on problem complexity. Finally, we propose an easy-to-scale method for automatically synthesizing a large-scale dataset of 27K challenging reasoning problems for complex reasoning and a hybrid thinking tuning method that trains smaller LLMs on this dataset to internalize the fast/slow hybrid reasoning strategies. Experiments on four reasoning benchmark datasets demonstrate that our slow thinking with dynamic workflows significantly outperforms Chain-of-Thought, and hybrid thinking achieves the highest accuracy while providing an effective balance between computational efficiency and performance. Fine-tuning using our hybrid thinking approach also significantly boosts the complex reasoning capabilities of open-source language models. The results showcase the promise of slow thinking, dynamic workflows, and hybrid thinking in expanding the frontier of complex problem-solving with LLMs. Code and data will be released at \url{https://github.com/wenlinyao/HDFlow}.
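The Dynamic Workflow component in the abstract — decompose a problem into sub-tasks, then assemble specialized LLM roles or symbolic tools to solve them in sequence — can be pictured as a small pipeline. The sketch below is a minimal assumption-laden illustration of that idea; `SubTask`, `run_workflow`, and the toy stages are invented for this example and are not the paper's code.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class SubTask:
    """One stage of a dynamically designed workflow: a name plus a
    specialist solver (in the paper, a specialized LLM or symbolic tool)."""
    name: str
    solver: Callable[[str], str]

def run_workflow(problem: str, subtasks: List[SubTask]) -> str:
    """Execute sub-tasks in order, passing each result to the next stage."""
    state = problem
    for task in subtasks:
        state = task.solver(state)
    return state

# Toy example: a "parse" stage normalizes the input, then a "compute"
# stage acts as a symbolic-tool stand-in for exact arithmetic.
workflow = [
    SubTask("parse", lambda s: s.replace("plus", "+")),
    SubTask("compute", lambda s: str(eval(s))),
]
print(run_workflow("2 plus 3", workflow))  # → 5
```

The point of the decomposition is that each stage can use whichever solver is most reliable for it (e.g. exact symbolic computation instead of free-form LLM generation).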