DSBench: How Far Are Data Science Agents to Becoming Data Science Experts?
Liqiang Jing, Zhehui Huang, Xiaoyang Wang, Wenlin Yao, Wenhao Yu, Kaixin Ma, Hongming Zhang, Xinya Du, Dong Yu
2024-09-13

Summary
This paper introduces DSBench, a benchmark designed to evaluate how well data science agents handle realistic data analysis and modeling tasks compared to human experts.
What's the problem?
Existing benchmarks for data science agents rely on simplified settings and do not capture the complexity of real-world data science work, such as long task descriptions, multimodal backgrounds, and large, multi-table datasets. This makes it hard to judge how capable these agents actually are.
What's the solution?
The authors built DSBench, which comprises 466 data analysis tasks and 74 data modeling tasks drawn from Eloquence and Kaggle competitions. The benchmark evaluates agents in realistic settings, including long contexts, multimodal task backgrounds, reasoning over large data files and multi-table structures, and end-to-end data modeling, and it introduces a new metric, the Relative Performance Gap (RPG), to score how closely an agent's data modeling results approach the best human solutions.
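This summary does not spell out how RPG is computed, but a natural reading is that it normalizes an agent's score between a simple baseline and the best human (competition-winning) score, so that higher values mean more of the gap to human experts has been closed. The sketch below assumes that formulation; the function name, arguments, and clipping behavior are illustrative and not taken from the authors' implementation.

```python
def relative_performance_gap(agent_score: float,
                             baseline_score: float,
                             best_human_score: float,
                             higher_is_better: bool = True) -> float:
    """Normalize an agent's task score between a naive baseline and the
    best human (e.g., Kaggle leaderboard) score.

    Returns 1.0 when the agent matches the best human and 0.0 when it
    does no better than the baseline. This is an assumed formulation,
    not necessarily the exact definition used in DSBench.
    """
    if not higher_is_better:
        # Flip signs so that larger is always better (e.g., for RMSE).
        agent_score, baseline_score, best_human_score = (
            -agent_score, -baseline_score, -best_human_score)
    denom = best_human_score - baseline_score
    if denom == 0:
        return 0.0
    gap = (agent_score - baseline_score) / denom
    # Clip to [0, 1] so scores below the baseline or above the best
    # human do not fall outside the intended range.
    return max(0.0, min(1.0, gap))


# Example: an agent whose model reaches 0.78 AUC, against a 0.50
# baseline and a 0.92 winning submission, closes about 67% of the gap.
print(relative_performance_gap(0.78, 0.50, 0.92))  # ~0.667
```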
Why it matters?
By identifying the limitations of current data science agents, this research helps guide future improvements in AI technology. Enhancing these agents could lead to better tools for industries that rely on data analysis, such as finance and healthcare.
Abstract
Large Language Models (LLMs) and Large Vision-Language Models (LVLMs) have demonstrated impressive language/vision reasoning abilities, igniting the recent trend of building agents for targeted applications such as shopping assistants or AI software engineers. Recently, many data science benchmarks have been proposed to investigate their performance in the data science domain. However, existing data science benchmarks still fall short when compared to real-world data science applications due to their simplified settings. To bridge this gap, we introduce DSBench, a comprehensive benchmark designed to evaluate data science agents with realistic tasks. This benchmark includes 466 data analysis tasks and 74 data modeling tasks, sourced from Eloquence and Kaggle competitions. DSBench offers a realistic setting by encompassing long contexts, multimodal task backgrounds, reasoning with large data files and multi-table structures, and performing end-to-end data modeling tasks. Our evaluation of state-of-the-art LLMs, LVLMs, and agents shows that they struggle with most tasks, with the best agent solving only 34.12% of data analysis tasks and achieving a 34.74% Relative Performance Gap (RPG). These findings underscore the need for further advancements in developing more practical, intelligent, and autonomous data science agents.