AutoResearchBench: Benchmarking AI Agents on Complex Scientific Literature Discovery
Lei Xiong, Kun Luo, Ziyi Xia, Wenbo Zhang, Jin-Ge Yao, Zheng Liu, Jingying Shao, Jianlyu Chen, Hongjin Qian, Xi Yang, Qian Yu, Hao Li, Chen Yue, Xiaan Du, Yuyang Wang, Yesheng Liu, Haiyu Xu, Zhicheng Dou
2026-04-29
Summary
This paper introduces a new benchmark for testing how well AI can do scientific research, specifically finding and understanding relevant research papers.
What's the problem?
AI agents are getting better at using the internet, but they still struggle with the complex tasks that actual scientific research requires, like deeply understanding concepts and finding specific papers based on detailed criteria. Existing benchmarks don't accurately measure these skills because they are either too simple or focused on general web browsing instead of research.
What's the solution?
The researchers created a benchmark called AutoResearchBench. It has two types of challenges: 'Deep Research', where the AI has to track down one specific paper by following a chain of clues, and 'Wide Research', where it needs to find *all* papers that satisfy certain conditions. This benchmark is designed to be much harder than previous ones, requiring genuine understanding of science and careful searching.
Why does it matter?
This work is important because it provides a realistic and challenging test of AI agents' ability to autonomously conduct scientific research. The results show that even the most advanced AI models struggle with these tasks, highlighting areas where further improvement is needed to truly automate the research process and accelerate scientific discovery.
Abstract
Autonomous scientific research has advanced significantly thanks to the development of AI agents. One key step in this process is finding the right scientific literature, whether to explore existing knowledge on a research problem or to acquire evidence for verifying assumptions and supporting claims. To assess AI agents' capability in driving this process, we present AutoResearchBench, a dedicated benchmark for autonomous scientific literature discovery. AutoResearchBench consists of two complementary task types: (1) Deep Research, which requires tracking down a specific target paper through a progressive, multi-step probing process, and (2) Wide Research, which requires comprehensively collecting a set of papers satisfying given conditions. Compared to previous benchmarks on agentic web browsing, AutoResearchBench is distinguished along three dimensions: it is research-oriented, calling for in-depth comprehension of scientific concepts; literature-focused, demanding fine-grained utilization of detailed information; and open-ended, involving an unknown number of qualified papers and thus requiring deliberate reasoning and search throughout. These properties make AutoResearchBench uniquely suited for evaluating autonomous research capabilities, and extraordinarily challenging. Even the most powerful LLMs, despite having largely conquered general agentic web-browsing benchmarks such as BrowseComp, achieve only 9.39% accuracy on Deep Research and 9.31% IoU on Wide Research, while many other strong baselines fall below 5%. We publicly release the dataset, evaluation pipeline, and code at https://github.com/CherYou/AutoResearchBench to facilitate future research in this direction.
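To make the Wide Research metric concrete, here is a minimal sketch of a set-level IoU (intersection over union) scorer. It assumes each paper is identified by a canonical ID such as an arXiv ID; the function name, ID format, and handling of the empty case are illustrative assumptions, not the benchmark's released evaluation code.

```python
def wide_research_iou(predicted: set[str], gold: set[str]) -> float:
    """Set-level IoU between an agent's retrieved papers and the gold set.

    Papers are assumed to be identified by canonical IDs (e.g. arXiv IDs).
    Illustrative sketch only; not the benchmark's official scorer.
    """
    if not predicted and not gold:
        return 1.0  # both sets empty: treat as a trivially perfect match
    intersection = len(predicted & gold)
    union = len(predicted | gold)
    return intersection / union


# Example: the agent returns 3 papers, 2 of which are in the 4-paper gold set.
pred = {"2401.01234", "2403.05678", "2310.11111"}
gold = {"2401.01234", "2403.05678", "2309.22222", "2312.33333"}
print(f"IoU = {wide_research_iou(pred, gold):.4f}")  # 2 / 5 = 0.4000
```

Under a metric like this, both missing qualified papers and including unqualified ones lower the score, which matches the open-ended nature of Wide Research: the agent cannot inflate its result by over-retrieving, since every spurious paper enlarges the union.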