LongBench v2: Towards Deeper Understanding and Reasoning on Realistic Long-context Multitasks
Yushi Bai, Shangqing Tu, Jiajie Zhang, Hao Peng, Xiaozhi Wang, Xin Lv, Shulin Cao, Jiazheng Xu, Lei Hou, Yuxiao Dong, Jie Tang, Juanzi Li
2024-12-20

Summary
This paper introduces LongBench v2, a new benchmark designed to test how well large language models (LLMs) can understand and reason over long texts across a range of real-world tasks. It comprises 503 challenging multiple-choice questions that require deep comprehension.
What's the problem?
As LLMs are used more widely, there is a need to evaluate their ability to handle complex, long-context problems. Existing benchmarks often do not adequately test these capabilities, leaving a gap in understanding how well models can reason over large amounts of information.
What's the solution?
LongBench v2 addresses this issue with a comprehensive set of questions grounded in long texts ranging from 8,000 to 2 million words. The questions span six task categories: single-document QA, multi-document QA, long in-context learning, long-dialogue history understanding, code repository understanding, and long structured data understanding. They were collected from nearly 100 highly educated contributors with diverse professional backgrounds and passed both automated and manual review to ensure high quality and difficulty. Even so, the best-performing model answering directly reached only 50.1% accuracy, below the 53.7% achieved by human experts under a 15-minute time limit, highlighting the need for stronger reasoning abilities.
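To make the evaluation setup concrete, here is a minimal scoring sketch for a multiple-choice benchmark of this shape. The record fields (context, question, choice_A through choice_D, answer) and the query_model placeholder are assumptions for illustration, not the benchmark's released data format or official evaluation code.

```python
import json
import re

def query_model(prompt: str) -> str:
    """Placeholder for an LLM call (assumption: any chat/completion API could be wrapped here)."""
    raise NotImplementedError

def build_prompt(item: dict) -> str:
    # Concatenate the long context with the question and the four options.
    return (
        f"{item['context']}\n\n"
        f"Question: {item['question']}\n"
        f"A. {item['choice_A']}\nB. {item['choice_B']}\n"
        f"C. {item['choice_C']}\nD. {item['choice_D']}\n"
        "Answer with a single letter (A, B, C, or D)."
    )

def extract_choice(response: str) -> str:
    # Take the first standalone A/B/C/D letter in the model's reply.
    match = re.search(r"\b([ABCD])\b", response)
    return match.group(1) if match else ""

def evaluate(path: str) -> float:
    # Each line of the file is one JSON record; accuracy is exact match on the gold letter.
    with open(path, encoding="utf-8") as f:
        items = [json.loads(line) for line in f]
    correct = sum(
        extract_choice(query_model(build_prompt(item))) == item["answer"]
        for item in items
    )
    return correct / len(items)
```

In practice query_model would wrap a real LLM API call, and prompts would need truncation when they exceed the model's context window, since LongBench v2 contexts run up to 2 million words.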
Why it matters?
This research is important because it sets a new standard for evaluating the reasoning capabilities of AI models when faced with complex and lengthy information. By improving how we assess these models, LongBench v2 can help develop smarter AI systems that better understand and process real-world tasks, ultimately enhancing their usefulness in applications like education, research, and customer service.
Abstract
This paper introduces LongBench v2, a benchmark designed to assess the ability of LLMs to handle long-context problems requiring deep understanding and reasoning across real-world multitasks. LongBench v2 consists of 503 challenging multiple-choice questions, with contexts ranging from 8k to 2M words, across six major task categories: single-document QA, multi-document QA, long in-context learning, long-dialogue history understanding, code repository understanding, and long structured data understanding. To ensure breadth and practicality, we collect data from nearly 100 highly educated individuals with diverse professional backgrounds. We employ both automated and manual review processes to maintain high quality and difficulty, resulting in human experts achieving only 53.7% accuracy under a 15-minute time constraint. Our evaluation reveals that the best-performing model, when answering the questions directly, achieves only 50.1% accuracy. In contrast, the o1-preview model, which includes longer reasoning, achieves 57.7%, surpassing the human baseline by 4%. These results highlight the importance of enhanced reasoning ability and scaling inference-time compute to tackle the long-context challenges in LongBench v2. The project is available at https://longbench2.github.io.
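The gap the abstract reports between direct answering (50.1%) and longer reasoning (57.7% for o1-preview) comes down to how much inference-time computation the model spends before committing to an option. The two prompt templates below illustrate that contrast; they are illustrative assumptions, not the prompts used in the paper.

```python
# Direct answering: the model must commit to a letter immediately.
DIRECT_PROMPT = (
    "{context}\n\n"
    "Question: {question}\n"
    "{choices}\n"
    "Respond with only the letter of the correct option (A, B, C, or D)."
)

# Longer reasoning: the model is asked to reason step by step before answering,
# spending more inference-time compute in exchange for potentially higher accuracy.
REASONING_PROMPT = (
    "{context}\n\n"
    "Question: {question}\n"
    "{choices}\n"
    "Work through the problem step by step, then give your final answer "
    "as a single letter on the last line."
)
```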