Benchmarking Information Retrieval Models on Complex Retrieval Tasks
Julian Killingback, Hamed Zamani
2025-09-10
Summary
This paper investigates how well computer systems can find information when asked complicated questions, going beyond simple searches. It focuses on 'retrieval models,' which are the parts of search engines that actually locate relevant documents.
What's the problem?
Current search systems are very good at answering straightforward questions, but people often ask more complex ones with multiple requirements or constraints. Existing tests for search systems don't accurately reflect these real-world, complex queries, so there is no reliable way to tell whether search technology is actually improving at handling them.
What's the solution?
The researchers created a new, diverse set of complex search tasks that are more realistic than previous tests. They then tested several state-of-the-art retrieval models on these tasks to see how well they performed. They also experimented with using large language models to reword or expand the search queries to see if that improved results.
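The query expansion and rewriting described above is typically done by prompting an LLM with the original query and searching with the rewritten text. A minimal sketch of this idea follows; the prompt wording, function names, and the strategy of appending the rewrite to the original query are illustrative assumptions, not the paper's actual method:

```python
def build_rewrite_prompt(query: str) -> str:
    # Hypothetical prompt template for LLM-based query rewriting;
    # the prompts used in the paper may differ.
    return (
        "Rewrite the following search query so it is more explicit, "
        "expanding each requirement or constraint into terms a "
        "retrieval system can match:\n\n"
        f"Query: {query}\nRewritten query:"
    )

def expand_query(query: str, llm) -> str:
    # `llm` is any callable mapping a prompt string to generated text.
    rewritten = llm(build_rewrite_prompt(query))
    # Appending the rewrite to the original query keeps the user's
    # exact wording available to the retriever.
    return f"{query} {rewritten.strip()}"
```

The expanded string would then be fed to the retrieval model in place of the raw query.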
Why does it matter?
This work is important because it highlights that even the best search systems still struggle with complex questions. It provides a better way to measure progress in this area and shows that simply using language models to rewrite queries doesn't always help, and can sometimes even make things worse. This research pushes the field towards building search technology that can handle the kinds of detailed requests people actually make.
Abstract
Large language models (LLMs) are incredible and versatile tools for text-based tasks that have enabled countless previously unimaginable applications. Retrieval models, in contrast, have not yet seen such capable general-purpose models emerge. To achieve this goal, retrieval models must be able to perform complex retrieval tasks, where queries contain multiple parts, constraints, or requirements in natural language. These tasks represent a natural progression from the simple, single-aspect queries that are used in the vast majority of existing, commonly used evaluation sets. Complex queries naturally arise as people expect search systems to handle more specific and often ambitious information requests, as is demonstrated by how people use LLM-based information systems. Despite the growing desire for retrieval models to expand their capabilities in complex retrieval tasks, there exist limited resources to assess the ability of retrieval models on a comprehensive set of diverse complex tasks. The few resources that do exist feature a limited scope and often lack realistic settings, making it hard to know the true capabilities of retrieval models on complex real-world retrieval tasks. To address this shortcoming and spur innovation in next-generation retrieval models, we construct a diverse and realistic set of complex retrieval tasks and benchmark a representative set of state-of-the-art retrieval models. Additionally, we explore the impact of LLM-based query expansion and rewriting on retrieval quality. Our results show that even the best models struggle to produce high-quality retrieval results, with the highest average nDCG@10 of only 0.346 and R@100 of only 0.587 across all tasks. Although LLM augmentation can help weaker models, the strongest model's performance decreases across all metrics with every rewriting technique.
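The abstract reports results in nDCG@10 and R@100, two standard retrieval metrics. As a refresher, they can be computed as in the sketch below (the relevance judgments in the usage example are illustrative, not data from the paper):

```python
import math

def dcg(rels):
    # Discounted cumulative gain: each relevance grade is discounted
    # by the log of its rank (ranks start at 1, so rank i uses log2(i + 1)).
    return sum(r / math.log2(i + 2) for i, r in enumerate(rels))

def ndcg_at_k(ranked_rels, k):
    # Normalize DCG@k by the DCG of the ideal (relevance-sorted) ranking.
    ideal_dcg = dcg(sorted(ranked_rels, reverse=True)[:k])
    return dcg(ranked_rels[:k]) / ideal_dcg if ideal_dcg > 0 else 0.0

def recall_at_k(ranked_ids, relevant_ids, k):
    # Fraction of all relevant documents that appear in the top k results.
    return len(set(ranked_ids[:k]) & relevant_ids) / len(relevant_ids)

# Illustrative usage: a ranking that places its relevance-3 document first
# scores close to 1.0; a perfect ordering scores exactly 1.0.
print(ndcg_at_k([3, 2, 1, 0], 10))  # 1.0
```

An average nDCG@10 of 0.346 therefore means the benchmarked rankings recover only about a third of the ideal ranking's discounted gain in the top ten results.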