Chaining the Evidence: Robust Reinforcement Learning for Deep Search Agents with Citation-Aware Rubric Rewards
Jiajie Zhang, Xin Lv, Ling Feng, Lei Hou, Juanzi Li
2026-01-12
Summary
This paper focuses on improving how deep search agents, programs built on large language models (LLMs) that answer complex questions by searching for information, learn to find and present that information. It tackles the problem of these agents sometimes giving incomplete or even incorrect answers, and proposes a new way to train them to be more reliable.
What's the problem?
Current methods for training these search agents rely on simple 'right or wrong' (binary outcome) rewards. Such a reward tells the agent nothing about *how* well it reached its answer, which leads to problems like exploiting shortcuts instead of genuinely tracing the evidence, or making things up (hallucinating). Essentially, a simple reward doesn't encourage thorough, fact-checked reasoning.
What's the solution?
The researchers developed a system called Citation-aware Rubric Rewards (CaRR). This system breaks down complex questions into smaller, verifiable steps, and rewards the program for correctly identifying key pieces of information, backing them up with proper citations, and building a clear chain of evidence that leads to the final answer. They also created a training method, Citation-aware Group Relative Policy Optimization (C-GRPO), that combines these detailed rewards with the traditional 'right or wrong' feedback to create a more robust learning process.
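The idea of combining fine-grained rubric rewards with the traditional outcome reward can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `Rubric` fields, the `carr_reward` scoring (fraction of rubrics whose hidden entity is identified with a correct citation, discounted when the evidence chain is incomplete), the `combined_reward` helper, and the `alpha` weighting are all hypothetical choices made for the example.

```python
from dataclasses import dataclass

@dataclass
class Rubric:
    """One verifiable single-hop fact the agent should establish (hypothetical schema)."""
    entity: str          # the hidden entity this rubric asks for
    citation_ok: bool    # was the supporting citation correct?
    identified: bool     # did the trajectory explicitly state the entity?

def carr_reward(rubrics: list[Rubric], chain_complete: bool) -> float:
    """Score = fraction of rubrics satisfied (entity identified AND cited),
    discounted if the evidence chain does not link to the answer.
    The 0.5 discount is an assumption for illustration."""
    if not rubrics:
        return 0.0
    satisfied = sum(r.identified and r.citation_ok for r in rubrics)
    score = satisfied / len(rubrics)
    return score if chain_complete else 0.5 * score

def combined_reward(outcome_correct: bool, rubrics: list[Rubric],
                    chain_complete: bool, alpha: float = 0.5) -> float:
    """Mix the binary outcome reward with the rubric score.
    alpha is a hypothetical weighting, not taken from the paper."""
    outcome = 1.0 if outcome_correct else 0.0
    return (1 - alpha) * outcome + alpha * carr_reward(rubrics, chain_complete)

# Example: two rubrics, one fully satisfied, final answer correct.
rubrics = [Rubric("entity A", citation_ok=True, identified=True),
           Rubric("entity B", citation_ok=False, identified=True)]
print(combined_reward(True, rubrics, chain_complete=True))  # → 0.75
```

A trajectory that guesses the right answer without citations now earns only the outcome share of the reward, which is the mechanism that discourages shortcut exploitation.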
Why it matters?
This work is important because it makes deep search agents more trustworthy and useful. By encouraging comprehensive reasoning and factual accuracy, it reduces the chances of getting misleading or incorrect information. This is especially valuable for complex research tasks where reliable information is crucial, and it represents a step forward in building AI systems that can truly assist with knowledge discovery.
Abstract
Reinforcement learning (RL) has emerged as a critical technique for enhancing LLM-based deep search agents. However, existing approaches primarily rely on binary outcome rewards, which fail to capture the comprehensiveness and factuality of agents' reasoning process, and often lead to undesirable behaviors such as shortcut exploitation and hallucinations. To address these limitations, we propose Citation-aware Rubric Rewards (CaRR), a fine-grained reward framework for deep search agents that emphasizes reasoning comprehensiveness, factual grounding, and evidence connectivity. CaRR decomposes complex questions into verifiable single-hop rubrics and requires agents to satisfy these rubrics by explicitly identifying hidden entities, supporting them with correct citations, and constructing complete evidence chains that link to the predicted answer. We further introduce Citation-aware Group Relative Policy Optimization (C-GRPO), which combines CaRR and outcome rewards for training robust deep search agents. Experiments show that C-GRPO consistently outperforms standard outcome-based RL baselines across multiple deep search benchmarks. Our analysis also validates that C-GRPO effectively discourages shortcut exploitation, promotes comprehensive, evidence-grounded reasoning, and exhibits strong generalization to open-ended deep research tasks. Our code and data are available at https://github.com/THUDM/CaRR.
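For readers unfamiliar with the GRPO family that C-GRPO builds on, the group-relative part can be sketched as below: each sampled trajectory's (mixed) reward is normalized against its own group's mean and standard deviation, so no learned value network is needed. This is the standard GRPO normalization, shown here only as background; it is not claimed to be the paper's exact variant.

```python
import statistics

def group_relative_advantages(rewards: list[float]) -> list[float]:
    """GRPO-style advantages: z-score each trajectory's reward within
    the group of rollouts sampled for the same question."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against a zero-variance group
    return [(r - mean) / std for r in rewards]

# Example: a group of four rollouts with mixed rewards.
print(group_relative_advantages([1.0, 0.0, 1.0, 0.0]))  # → [1.0, -1.0, 1.0, -1.0]
```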