T2R-bench: A Benchmark for Generating Article-Level Reports from Real World Industrial Tables
Jie Zhang, Changzai Pan, Kaiwen Wei, Sishi Xiong, Yu Zhao, Xiangyu Li, Jiaxin Peng, Xiaoyan Gu, Jian Yang, Wenhan Chang, Zhenhe Wu, Jiang Zhong, Shuangyong Song, Yongxiang Li, Xuelong Li
2025-09-02
Summary
This paper focuses on the difficulty large language models (LLMs) have in turning information from tables into written reports, a capability that's really important for real-world business use.
What's the problem?
While LLMs are getting good at *looking* at tables, they struggle to actually understand the information and write a clear, accurate report based on it. This is because tables can be super complex and varied, and current tests used to measure how well LLMs do this aren't realistic enough to show how they'd perform in a practical setting.
What's the solution?
The researchers created a new task called 'table-to-report' and built a new benchmark, called T2R-bench, specifically designed to evaluate this skill. The benchmark includes 457 real-world tables spanning 19 different industries, and the researchers also devised evaluation criteria to fairly judge the quality of the reports the LLMs generate. They then tested 25 different LLMs on this new benchmark.
Why it matters?
The results showed that even the best LLMs still have a lot of room for improvement when it comes to turning tables into reports, with the top model scoring only 62.71 overall. This highlights a key area where LLMs need to get better to be truly useful in many business applications, and provides a better way to measure progress.
Abstract
Extensive research has been conducted to explore the capabilities of large language models (LLMs) in table reasoning. However, the essential task of transforming table information into reports remains a significant challenge for industrial applications. This task is plagued by two critical issues: 1) the complexity and diversity of tables lead to suboptimal reasoning outcomes; and 2) existing table benchmarks lack the capacity to adequately assess the practical application of this task. To fill this gap, we propose the table-to-report task and construct a bilingual benchmark named T2R-bench, in which key information must flow from the tables into the reports. The benchmark comprises 457 industrial tables, all derived from real-world scenarios and encompassing 19 industry domains as well as 4 types of industrial tables. Furthermore, we propose evaluation criteria to fairly measure the quality of report generation. Experiments on 25 widely used LLMs reveal that even state-of-the-art models like Deepseek-R1 achieve an overall score of only 62.71, indicating that LLMs still have room for improvement on T2R-bench. Source code and data will be available after acceptance.