PRInTS: Reward Modeling for Long-Horizon Information Seeking

Jaewoo Lee, Archiki Prasad, Justin Chih-Yao Chen, Zaid Khan, Elias Stengel-Eskin, Mohit Bansal

2025-11-25

Summary

This paper focuses on improving how AI agents seek information by calling tools and reasoning across multiple steps, a process that remains difficult for agents backed by language models.

What's the problem?

AI agents often need to gather information over a series of steps, such as calling different tools and interpreting their results. Existing process reward models, which guide agents by scoring candidate steps, were designed for short reasoning chains with quick binary judgments. They struggle to assess the quality of steps that involve tool interactions and reasoning over tool outputs, and they cannot keep up with the rapidly growing context that accumulates over a long, involved task.

What's the solution?

The researchers developed a new generative process reward model called PRInTS, trained with two capabilities. First, it gives a dense score to each candidate step by reasoning over multiple quality dimensions, such as how well the agent interprets a tool's output and how informative the tool call was. Second, it summarizes the trajectory so far, compressing the growing context while preserving the details needed to evaluate the next step. At test time, the agent samples several candidate next steps, and PRInTS's scores are used to select the best one (best-of-n sampling), which lets the agent make better decisions on complex, multi-step tasks.
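The selection loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the scoring heuristic and summarization are toy stand-ins for the trained PRM, and all function names are hypothetical.

```python
# Sketch of best-of-n step selection with a PRInTS-style process reward model.
# prm_score and summarize are toy stubs (assumptions), standing in for the
# trained generative PRM described in the paper.

def prm_score(summary: str, candidate: str) -> float:
    """Stub PRM: score a candidate step given the compressed trajectory.
    A real generative PRM would reason over quality dimensions such as
    tool-call informativeness and interpretation of tool outputs."""
    # Toy heuristic: reward candidates that engage with a tool result.
    return float("tool result" in candidate) + 0.1 * len(candidate.split())

def summarize(summary: str, step: str, max_words: int = 50) -> str:
    """Stub trajectory summarization: keep the context bounded
    while retaining the most recent information."""
    words = (summary + " " + step).split()
    return " ".join(words[-max_words:])

def best_of_n_step(summary: str, candidates: list[str]) -> str:
    """Rank candidate next steps with the PRM and pick the best."""
    return max(candidates, key=lambda c: prm_score(summary, c))

# One iteration of the information-seeking loop.
summary = "Question: who wrote article X? Searched the web."
candidates = [
    "Call the search tool again with the same query.",
    "Read the tool result: the article names the author; extract it.",
]
best = best_of_n_step(summary, candidates)
summary = summarize(summary, best)  # context stays bounded for the next step
```

The key design point mirrored here is that the summary, not the full trajectory, is what the PRM conditions on, which is how PRInTS keeps evaluation tractable as the trajectory grows.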

Why it matters?

This research is important because it significantly improves the ability of AI agents to find and use information effectively. PRInTS allows smaller, more accessible AI models to perform as well as or even better than larger, more complex models on challenging information-seeking tasks, making advanced AI capabilities more widely available.

Abstract

Information-seeking is a core capability for AI agents, requiring them to gather and reason over tool-generated information across long trajectories. However, such multi-step information-seeking tasks remain challenging for agents backed by language models. While process reward models (PRMs) can guide agents by ranking candidate steps at test-time, existing PRMs, designed for short reasoning with binary judgment, cannot capture richer dimensions of information-seeking steps, such as tool interactions and reasoning over tool outputs, nor handle the rapidly growing context in long-horizon tasks. To address these limitations, we introduce PRInTS, a generative PRM trained with dual capabilities: (1) dense scoring based on the PRM's reasoning across multiple step quality dimensions (e.g., interpretation of tool outputs, tool call informativeness) and (2) trajectory summarization that compresses the growing context while preserving essential information for step evaluation. Extensive evaluations across FRAMES, GAIA (levels 1-3), and WebWalkerQA (easy-hard) benchmarks on multiple models, along with ablations, reveal that best-of-n sampling with PRInTS enhances information-seeking abilities of open-source models as well as specialized agents, matching or surpassing the performance of frontier models with a much smaller backbone agent and outperforming other strong reward modeling baselines.