NeedleBench: Can LLMs Do Retrieval and Reasoning in 1 Million Context Window?
Mo Li, Songyang Zhang, Yunxin Liu, Kai Chen
2024-07-17

Summary
This paper introduces NeedleBench, a new framework for evaluating how well large language models (LLMs) can retrieve information from, and reason over, very long texts of up to 1 million tokens.
What's the problem?
Current LLMs often struggle to understand long documents and extract the information relevant to a query. Most evaluations focus on shorter texts, which doesn't fully test models in real-world situations where they must process extensive documents. As a result, we have a limited picture of how well these models perform complex tasks that require deep reasoning over long contexts.
What's the solution?
NeedleBench consists of a series of progressively more challenging tasks that assess LLMs' abilities in bilingual long-context scenarios. It spans multiple length intervals (from 4k up to 1000k tokens) and depth ranges, inserting key facts ("needles") at controlled positions within long texts to measure how effectively models can find and use them; a minimal sketch of one such probe follows below. In addition, the paper introduces the Ancestral Trace Challenge (ATC), which simulates the multi-step logical reasoning likely to arise in real-world long-context tasks. The results show that existing models still have substantial room for improvement in long-context applications.
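To make the retrieval setup concrete, here is a minimal, hypothetical sketch of how a single needle-in-a-haystack probe might be constructed: a key fact is inserted at a chosen relative depth in long distractor text, and the model is asked to recover it. The function name, filler text, and sentence-boundary snapping are illustrative assumptions, not NeedleBench's actual implementation (a real evaluation would measure length and depth in tokens rather than characters).

    def build_sample(haystack: str, needle: str, depth: float) -> str:
        """Insert `needle` at a relative depth (0.0 = start, 1.0 = end) of `haystack`."""
        assert 0.0 <= depth <= 1.0
        pos = int(len(haystack) * depth)
        # Snap to the end of the preceding sentence so the needle isn't split mid-sentence.
        pos = haystack.rfind(".", 0, pos) + 1
        return haystack[:pos] + " " + needle + haystack[pos:]

    filler = "The quick brown fox jumps over the lazy dog. " * 2000  # stand-in distractor text
    needle = "The hidden passcode for the vault is 7413."            # the fact to retrieve
    prompt = (
        build_sample(filler, needle, depth=0.75)
        + "\n\nQuestion: What is the hidden passcode for the vault?"
    )

Sweeping the `depth` parameter across the document and the total length across intervals is what lets a benchmark like this map out where in a long context a model's retrieval starts to fail.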
Why it matters?
This research is important because it provides a structured way to assess the capabilities of LLMs in processing long texts, which is crucial for many practical applications like legal document analysis, research paper summarization, and more. By identifying the strengths and weaknesses of current models, NeedleBench can help guide future improvements in AI technology, making it more effective for users who rely on understanding large amounts of information.
Abstract
In evaluating the long-context capabilities of large language models (LLMs), identifying content relevant to a user's query from original long documents is a crucial prerequisite for any LLM to answer questions based on long text. We present NeedleBench, a framework consisting of a series of progressively more challenging tasks for assessing bilingual long-context capabilities, spanning multiple length intervals (4k, 8k, 32k, 128k, 200k, 1000k, and beyond) and different depth ranges, allowing the strategic insertion of critical data points in different text depth zones to rigorously test the retrieval and reasoning capabilities of models in diverse contexts. We use the NeedleBench framework to assess how well the leading open-source models can identify key information relevant to the question and apply that information to reasoning in bilingual long texts. Furthermore, we propose the Ancestral Trace Challenge (ATC) to mimic the complexity of logical reasoning challenges that are likely to be present in real-world long-context tasks, providing a simple method for evaluating LLMs in dealing with complex long-context situations. Our results suggest that current LLMs have significant room for improvement in practical long-context applications, as they struggle with the complexity of logical reasoning challenges that are likely to be present in real-world long-context tasks. All code and resources are available at OpenCompass: https://github.com/open-compass/opencompass.
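As a companion illustration of the ATC idea, the sketch below shows how a chained-relations reasoning probe might be generated: several relational facts form a single ancestry chain, and answering the question requires following every link rather than retrieving one isolated fact. The relation template, function name, and prompt wording here are assumptions for illustration, not the paper's verbatim ATC prompts.

    import random

    def build_atc_prompt(names: list[str]) -> str:
        """Chain len(names)-1 relational facts; answering requires following every link."""
        facts = [
            f"{parent} is the parent of {child}."
            for parent, child in zip(names, names[1:])
        ]
        random.shuffle(facts)  # shuffle so surface order doesn't reveal the chain
        question = f"Who is the earliest ancestor of {names[-1]} mentioned above?"
        return " ".join(facts) + "\n\n" + question

    print(build_atc_prompt(["Ada", "Ben", "Cleo", "Dimitri", "Elif"]))
    # Expected answer: "Ada" -- recoverable only by chaining all four facts.

Lengthening the chain (and burying the facts in long distractor text) makes the task harder in a controlled way, which is what allows this style of probe to expose the multi-step reasoning failures the paper reports.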