FinSearchComp: Towards a Realistic, Expert-Level Evaluation of Financial Search and Reasoning

Liang Hu, Jianpeng Jiao, Jiashuo Liu, Yanle Ren, Zhoufutu Wen, Kaiyuan Zhang, Xuanliang Zhang, Xiang Gao, Tianci He, Fei Hu, Yali Liao, Zaiyuan Wang, Chenghao Yang, Qianyu Yang, Mingren Yin, Zhiyuan Zeng, Ge Zhang, Xinyi Zhang, Xiying Zhao, Zhenwei Zhu, Hongseok Namkoong, Wenhao Huang

2025-09-19

Summary

This paper introduces FinSearchComp, a new benchmark designed to test how well AI agents can perform realistic financial research tasks the way a human financial analyst would. It offers a way to measure whether these agents are actually getting better at understanding information and using it to solve complex problems.

What's the problem?

Currently, there are no good, publicly available tests of how well AI agents handle the specific challenges of searching for financial data. Financial research is complicated: it requires up-to-date information and specialized knowledge, and finding an answer often takes multiple steps. Building a test that accurately reflects this is difficult and demands real financial expertise.

What's the solution?

The researchers created FinSearchComp, which includes 635 questions based on the day-to-day work of real financial analysts. The questions fall into three categories: fetching time-sensitive current data, looking up simple historical facts, and conducting complex investigations of past events. They engaged 70 financial professionals to help create and check the questions to make sure they were realistic and accurate, then tested 21 different AI models on the benchmark (a simplified sketch of such an evaluation loop appears below).
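To make the setup concrete, here is a minimal Python sketch of what running an agent over a benchmark like this might look like. The question schema (fields such as `task_type`, `question`, `answer`) and the exact-match scorer are illustrative assumptions, not the paper's actual data format or grading protocol.

```python
# Hypothetical sketch of a FinSearchComp-style evaluation loop.
# Field names and the exact-match scoring are assumptions for
# illustration; the paper's real grading pipeline may differ.
from dataclasses import dataclass
from typing import Callable

@dataclass
class BenchmarkItem:
    task_type: str  # e.g. "time_sensitive", "simple_lookup", "complex_investigation"
    question: str
    answer: str     # expert-verified reference answer

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace for a lenient string comparison."""
    return " ".join(text.lower().split())

def evaluate(agent: Callable[[str], str],
             items: list[BenchmarkItem]) -> dict[str, float]:
    """Run the agent on every item and report accuracy per task type."""
    correct: dict[str, int] = {}
    total: dict[str, int] = {}
    for item in items:
        prediction = agent(item.question)
        total[item.task_type] = total.get(item.task_type, 0) + 1
        if normalize(prediction) == normalize(item.answer):
            correct[item.task_type] = correct.get(item.task_type, 0) + 1
    return {t: correct.get(t, 0) / n for t, n in total.items()}

if __name__ == "__main__":
    items = [
        BenchmarkItem("simple_lookup",
                      "What was Company X's FY2020 revenue?", "$1.2B"),
    ]
    # Trivial stand-in agent; a real agent would call an LLM equipped
    # with web-search and financial-data tools.
    echo_agent = lambda q: "$1.2B"
    print(evaluate(echo_agent, items))  # {'simple_lookup': 1.0}
```

In practice, time-sensitive questions would also need answers checked against live data at evaluation time rather than a fixed reference string, which is part of what makes this kind of benchmark hard to build.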

Why it matters?

This benchmark is important because it provides a challenging and realistic way to evaluate AI agents on complex financial tasks. The results show that giving agents access to web search and financial tools substantially improves their performance, and that where a model and its tools were developed also affects how well it does: Grok 4 with web search performed best on global-market questions, while DouBao with web search led on Greater China questions. Ultimately, FinSearchComp helps push the development of AI that can genuinely assist in financial analysis and potentially contribute to more general intelligence.

Abstract

Search has emerged as core infrastructure for LLM-based agents and is widely viewed as critical on the path toward more general intelligence. Finance is a particularly demanding proving ground: analysts routinely conduct complex, multi-step searches over time-sensitive, domain-specific data, making it ideal for assessing both search proficiency and knowledge-grounded reasoning. Yet no existing open financial datasets evaluate the data-searching capability of end-to-end agents, largely because constructing realistic, complicated tasks requires deep financial expertise and time-sensitive data is hard to evaluate. We present FinSearchComp, the first fully open-source agent benchmark for realistic, open-domain financial search and reasoning. FinSearchComp comprises three tasks -- Time-Sensitive Data Fetching, Simple Historical Lookup, and Complex Historical Investigation -- that closely reproduce real-world financial analyst workflows. To ensure difficulty and reliability, we engage 70 professional financial experts for annotation and implement a rigorous multi-stage quality-assurance pipeline. The benchmark includes 635 questions spanning global and Greater China markets, and we evaluate 21 models (products) on it. Grok 4 (web) tops the global subset, approaching expert-level accuracy. DouBao (web) leads on the Greater China subset. Experimental analyses show that equipping agents with web search and financial plugins substantially improves results on FinSearchComp, and that the country of origin of models and tools significantly impacts performance. By aligning with realistic analyst tasks and providing end-to-end evaluation, FinSearchComp offers a professional, high-difficulty testbed for complex financial search and reasoning.