Towards Large Reasoning Models: A Survey of Reinforced Reasoning with Large Language Models
Fengli Xu, Qianyue Hao, Zefang Zong, Jingwei Wang, Yunke Zhang, Jingyi Wang, Xiaochong Lan, Jiahui Gong, Tianjian Ouyang, Fanjin Meng, Chenyang Shao, Yuwei Yan, Qinglong Yang, Yiwen Song, Sijian Ren, Xinyuan Hu, Yu Li, Jie Feng, Chen Gao, Yong Li
2025-01-17

Summary
This paper talks about how researchers are trying to make large language models (LLMs) think more like humans. It's like teaching a super-smart computer to solve complex problems by breaking them down into steps, just like we do when we're figuring something out.
What's the problem?
Regular LLMs are great at generating text, but they struggle with complex reasoning tasks. It's like having a friend who's really good at talking but has trouble solving puzzles or explaining their thought process. Researchers want to make these AI models better at tackling tricky problems and explaining how they got to their answers.
What's the solution?
The researchers are using a few clever tricks to make LLMs better at reasoning. First, they're teaching the AI to think in steps, kind of like showing your work in a math problem. They're also using something called reinforcement learning, which is like letting the AI practice solving problems over and over, learning from its mistakes. Finally, they're letting the AI spend more time (and generate more "thinking" text) when it actually answers a question, an idea the paper calls test-time scaling, which helps it arrive at better answers.
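To make the "think in steps" and "more time to think" ideas concrete, here is a minimal, illustrative sketch that is not taken from the paper: it prompts a model to reason step by step, samples several reasoning chains, and takes a majority vote over the final answers, a simple way of spending extra compute at test time (often called self-consistency). It assumes an OpenAI-compatible chat API; the model name and the answer-extraction heuristic are placeholders.

```python
# Illustrative sketch only: chain-of-thought prompting plus majority voting
# (self-consistency) as a simple way to spend more compute at test time.
# Assumes the `openai` Python package (v1.x) and an OpenAI-compatible endpoint;
# the model name and the answer-extraction rule are placeholders.
from collections import Counter
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

COT_PROMPT = (
    "Solve the problem step by step, showing your reasoning. "
    "End with a line of the form 'Answer: <final answer>'.\n\nProblem: {question}"
)

def extract_answer(text: str) -> str:
    """Pull the final answer from the last 'Answer:' line (placeholder heuristic)."""
    for line in reversed(text.strip().splitlines()):
        if line.lower().startswith("answer:"):
            return line.split(":", 1)[1].strip()
    return text.strip().splitlines()[-1]

def solve_with_voting(question: str, n_samples: int = 8) -> str:
    """Sample several reasoning chains and return the most common final answer."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",   # placeholder model name
        messages=[{"role": "user", "content": COT_PROMPT.format(question=question)}],
        temperature=0.8,       # diversity across reasoning chains
        n=n_samples,           # more samples = more test-time compute
    )
    answers = [extract_answer(choice.message.content) for choice in resp.choices]
    return Counter(answers).most_common(1)[0][0]

if __name__ == "__main__":
    print(solve_with_voting("A train travels 120 km in 1.5 hours. What is its average speed in km/h?"))
```

Majority voting helps because independently sampled chains tend to make different mistakes, while chains that reach the correct result tend to agree on the same final answer.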
Why does it matter?
This matters because if we can make AI think more like humans, it could help solve all sorts of complex problems in the real world. Imagine having an AI assistant that could help with everything from scientific research to creative problem-solving in business. It's a big step towards creating AI that can truly understand and interact with the world in a human-like way, which could lead to amazing breakthroughs in many fields.
Abstract
Language has long been conceived as an essential tool for human reasoning. The breakthrough of Large Language Models (LLMs) has sparked significant research interest in leveraging these models to tackle complex reasoning tasks. Researchers have moved beyond simple autoregressive token generation by introducing the concept of "thought" -- a sequence of tokens representing intermediate steps in the reasoning process. This innovative paradigm enables LLMs to mimic complex human reasoning processes, such as tree search and reflective thinking. Recently, an emerging trend of learning to reason has applied reinforcement learning (RL) to train LLMs to master reasoning processes. This approach enables the automatic generation of high-quality reasoning trajectories through trial-and-error search algorithms, significantly expanding LLMs' reasoning capacity by providing substantially more training data. Furthermore, recent studies demonstrate that encouraging LLMs to "think" with more tokens during test-time inference can further significantly boost reasoning accuracy. Therefore, train-time and test-time scaling combine to chart a new research frontier -- a path toward Large Reasoning Models. The introduction of OpenAI's o1 series marks a significant milestone in this research direction. In this survey, we present a comprehensive review of recent progress in LLM reasoning. We begin by introducing the foundational background of LLMs and then explore the key technical components driving the development of large reasoning models, with a focus on automated data construction, learning-to-reason techniques, and test-time scaling. We also analyze popular open-source projects aimed at building large reasoning models, and conclude with open challenges and future research directions.
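As a concrete (and deliberately simplified) illustration of the automated data construction idea mentioned above, the sketch below shows one common pattern from the learning-to-reason literature: rejection sampling of reasoning trajectories, where many candidate chains are sampled per problem and only those whose final answer matches a known reference are kept as training examples. This is not the specific pipeline of any system the survey covers; sample_chain is a toy stand-in for a real LLM sampler.

```python
# Illustrative sketch of rejection-sampling data construction for reasoning:
# sample candidate chains of thought per problem, keep only those whose final
# answer matches the known reference, and treat the kept chains as training data.
# `sample_chain` is a toy stand-in for a real LLM sampler.
import random
from dataclasses import dataclass

@dataclass
class ReasoningExample:
    question: str
    chain_of_thought: str   # intermediate "thought" tokens
    answer: str             # final answer the chain arrives at

def sample_chain(question: str, reference: str) -> str:
    """Toy stand-in for an LLM: returns a chain that is correct about half the time."""
    guess = reference if random.random() < 0.5 else "(wrong guess)"
    return f"Step 1: think about '{question}'.\nStep 2: compute the result.\nAnswer: {guess}"

def final_answer(chain: str) -> str:
    """Read the answer off the last 'Answer:' line of a chain."""
    return chain.rsplit("Answer:", 1)[-1].strip()

def build_dataset(problems, samples_per_problem=16):
    """Keep only chains that reach the reference answer (correctness acts as the filter)."""
    kept = []
    for question, reference in problems:
        for _ in range(samples_per_problem):
            chain = sample_chain(question, reference)
            if final_answer(chain) == reference:
                kept.append(ReasoningExample(question, chain, reference))
    return kept

if __name__ == "__main__":
    data = build_dataset([("What is 12 * 7?", "84"), ("What is 120 / 1.5?", "80")])
    print(f"kept {len(data)} verified reasoning trajectories for fine-tuning")
```

In practice, the kept trajectories would be used to fine-tune the model or to train a reward model, which is one way of realizing the trial-and-error loop the abstract describes.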