PokeeResearch: Effective Deep Research via Reinforcement Learning from AI Feedback and Robust Reasoning Scaffold
Yi Wan, Jiuqi Wang, Liam Li, Jinsong Liu, Ruihao Zhu, Zheqing Zhu
2025-10-22
Summary
This paper introduces PokeeResearch-7B, a new AI system designed to act as a research assistant. It is a 'deep research agent', meaning it can break down complicated questions, find information online, and write up answers grounded in what it finds.
What's the problem?
Current AI research assistants aren't very good at a few key things. They often grab irrelevant information when searching, don't always stick to the facts, and can easily get confused or 'break' when using different tools to help them. Basically, they're not reliable enough for serious research tasks.
What's the solution?
The researchers created PokeeResearch-7B and trained it using a special method called reinforcement learning from AI feedback, where the AI learns by getting feedback from another AI. This feedback focuses on making sure the answers are accurate, cite sources properly, and follow instructions. They also built in a way for the AI to double-check its own work and recover when a tool it's using fails, making it more robust.
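The AI-feedback idea can be illustrated with a minimal sketch: a "judge" scores a candidate answer on the three criteria the paper names (factual accuracy, citation faithfulness, instruction adherence) and the scores are combined into one scalar reward for training. The judge below is a toy keyword heuristic standing in for an LLM judge; the function names and weights are illustrative assumptions, not the paper's actual reward model.

```python
# Toy stand-in for an LLM judge: scores an answer on three criteria in [0, 1].
def judge_answer(question, answer, sources):
    return {
        # Factual accuracy: does the answer actually use the retrieved evidence?
        "accuracy": 1.0 if any(s.lower() in answer.lower() for s in sources) else 0.0,
        # Citation faithfulness: does the answer cite at least one source, e.g. "[1]"?
        "citations": 1.0 if "[" in answer and "]" in answer else 0.0,
        # Instruction adherence: crude check that a non-empty answer was produced.
        "adherence": 1.0 if answer.strip() else 0.0,
    }

# Weighted combination of the judge's scores into a single RL reward signal.
def reward(question, answer, sources, weights=None):
    weights = weights or {"accuracy": 0.5, "citations": 0.3, "adherence": 0.2}
    scores = judge_answer(question, answer, sources)
    return sum(weights[k] * scores[k] for k in weights)
```

In an RLAIF loop, a scalar like this replaces human-labeled rewards: the policy generates answers, the judge scores them, and the policy is updated to increase expected reward, with no manual annotation required.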
Why does it matter?
PokeeResearch-7B performs better than other AI systems of its size across ten popular research benchmarks. This shows that by carefully training AI with the right feedback and giving it the ability to think through problems step-by-step, we can create AI assistants that are genuinely helpful and trustworthy for research.
Abstract
Tool-augmented large language models (LLMs) are emerging as deep research agents, systems that decompose complex queries, retrieve external evidence, and synthesize grounded responses. Yet current agents remain limited by shallow retrieval, weak alignment metrics, and brittle tool-use behavior. We introduce PokeeResearch-7B, a 7B-parameter deep research agent built under a unified reinforcement learning framework for robustness, alignment, and scalability. PokeeResearch-7B is trained by an annotation-free Reinforcement Learning from AI Feedback (RLAIF) framework to optimize policies using LLM-based reward signals that capture factual accuracy, citation faithfulness, and instruction adherence. A chain-of-thought-driven multi-call reasoning scaffold further enhances robustness through self-verification and adaptive recovery from tool failures. Among 10 popular deep research benchmarks, PokeeResearch-7B achieves state-of-the-art performance among 7B-scale deep research agents. This highlights that careful reinforcement learning and reasoning design can produce efficient, resilient, and research-grade AI agents. The model and inference code are open-sourced under the MIT license at https://github.com/Pokee-AI/PokeeResearchOSS.
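The robustness half of the abstract, adaptive recovery from tool failures plus self-verification before an answer is used, can be sketched as a small control loop. Everything here is a hypothetical stand-in for illustration: `call_with_recovery`, `research_step`, and the `verify` check are assumed helpers, not the paper's actual scaffold or API.

```python
# Call a tool; on an exception, retry it, then fall back to alternate tools.
def call_with_recovery(tool, query, fallbacks=(), max_retries=2):
    for candidate in (tool, *fallbacks):
        for _ in range(max_retries):
            try:
                return candidate(query)
            except Exception:
                continue  # transient failure: retry this candidate
    return None  # every tool failed: caller must degrade gracefully

# One scaffold step: retrieve evidence, then self-verify it before use.
def research_step(query, tool, fallbacks=(), verify=lambda evidence: bool(evidence)):
    evidence = call_with_recovery(tool, query, fallbacks)
    if evidence is None or not verify(evidence):
        return {"status": "unverified", "evidence": None}
    return {"status": "ok", "evidence": evidence}
```

The design point is that a tool error becomes an ordinary branch in the agent's reasoning rather than a crash: the agent retries, switches tools, or explicitly reports that the evidence could not be verified.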