LoongRL: Reinforcement Learning for Advanced Reasoning over Long Contexts
Siyuan Wang, Gaokai Zhang, Li Lyna Zhang, Ning Shang, Fan Yang, Dongyao Chen, Mao Yang
2025-10-23
Summary
This paper introduces LoongRL, a reinforcement learning method that helps large language models reason over very long pieces of text, something they often struggle with.
What's the problem?
Large language models are good at answering questions when the information is short and straightforward, and reinforcement learning can sharpen this short-context reasoning further. However, they struggle when the information needed to answer a question is spread across a very long document and the answer requires following a complex chain of thought. It is also hard to find sufficiently difficult training examples for these long-context reasoning tasks.
What's the solution?
The researchers created a system called LoongRL that is built around a data-synthesis technique called KeyChain. KeyChain takes simple multi-hop question-answering problems and makes them much harder by hiding the actual question within a large collection of irrelevant documents, linked together by chains of unique identifiers. The model has to carefully follow these identifiers step by step, find the real question, gather the right information, and then reason through it to reach the answer. By training on these difficult tasks, the model learns a pattern of planning, retrieving information, reasoning, and rechecking its work, and this pattern generalizes to texts much longer than those it was trained on.
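The KeyChain idea described above can be sketched roughly in code. This is a minimal illustration, not the paper's exact procedure: the function name `keychain_synthesize`, the chain length, the link phrasing, and the way distractor documents are mixed in are all illustrative assumptions.

```python
import random
import uuid

def keychain_synthesize(question, answer, distractor_docs, chain_len=4, seed=0):
    """Illustrative KeyChain-style synthesis (details are assumptions):
    hide a short multi-hop QA question behind a chain of UUID links
    scattered among distracting documents."""
    rng = random.Random(seed)
    # Generate chain_len + 1 random UUID keys for the chain.
    keys = [str(uuid.UUID(int=rng.getrandbits(128))) for _ in range(chain_len + 1)]

    # Each link points to the next key; the final key reveals the true question.
    links = [f"Key {keys[i]} leads to key {keys[i + 1]}." for i in range(chain_len)]
    links.append(f"The question for key {keys[-1]} is: {question}")

    # Mix the chain links into the distractor documents so the model must
    # search for them (a real pipeline might also add decoy chains).
    docs = list(distractor_docs) + links
    rng.shuffle(docs)

    prompt = (f"Start from key {keys[0]}, follow the chain to find the true "
              f"question, then answer it.\n\n" + "\n\n".join(docs))
    return prompt, answer
```

Under this sketch, a model must trace the chain from the starting key to recover the hidden question before it can even begin answering, which is what makes the synthesized task hard regardless of how easy the original QA pair was.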
Why it matters?
This work is important because it significantly improves the ability of language models to handle long documents, allowing them to perform complex reasoning tasks that were previously out of reach. The LoongRL-14B model achieved performance comparable to much larger and more complex models, and it also improved its ability to find specific information within long texts and maintain its reasoning skills on shorter texts.
Abstract
Reasoning over long contexts is essential for large language models. While reinforcement learning (RL) enhances short-context reasoning by inducing "Aha" moments in chain-of-thought, the advanced thinking patterns required for long-context reasoning remain largely unexplored, and high-difficulty RL data are scarce. In this paper, we introduce LoongRL, a data-driven RL method for advanced long-context reasoning. Central to LoongRL is KeyChain, a synthesis approach that transforms short multi-hop QA into high-difficulty long-context tasks by inserting UUID chains that hide the true question among large collections of distracting documents. Solving these tasks requires the model to trace the correct chain step-by-step, identify the true question, retrieve relevant facts and reason over them to answer correctly. RL training on KeyChain data induces an emergent plan-retrieve-reason-recheck reasoning pattern that generalizes far beyond training length. Models trained at 16K effectively solve 128K tasks without prohibitive full-length RL rollout costs. On Qwen2.5-7B and 14B, LoongRL substantially improves long-context multi-hop QA accuracy by +23.5% and +21.1% absolute gains. The resulting LoongRL-14B reaches a score of 74.2, rivaling much larger frontier models such as o3-mini (74.5) and DeepSeek-R1 (74.9). It also improves long-context retrieval, passes all 128K needle-in-a-haystack stress tests, and preserves short-context reasoning capabilities.