Collapse of Dense Retrievers: Short, Early, and Literal Biases Outranking Factual Evidence
Mohsen Fayyaz, Ali Modarressi, Hinrich Schuetze, Nanyun Peng
2025-03-12
Summary
This paper shows that AI search tools often pick documents for the wrong reasons, such as favoring shorter documents or ones that repeat the query's words, even when those documents don't contain the right answer.
What's the problem?
AI search tools often rely on easy shortcuts, like preferring short documents or repeated keywords, instead of checking whether a document actually answers the question, which leads to bad results.
What's the solution?
The researchers designed controlled experiments that show exactly how these tools fail, and they highlight the need for better training so retrievers focus on actual answers instead of surface-level clues.
Why it matters?
This helps improve AI tools like chatbots and research assistants by ensuring they retrieve accurate information, avoiding mistakes that could mislead people.
Abstract
Dense retrieval models are commonly used in Information Retrieval (IR) applications, such as Retrieval-Augmented Generation (RAG). Since they often serve as the first step in these systems, their robustness is critical to avoid failures. In this work, by repurposing a relation extraction dataset (e.g., Re-DocRED), we design controlled experiments to quantify the impact of heuristic biases, such as favoring shorter documents, in retrievers like Dragon+ and Contriever. Our findings reveal significant vulnerabilities: retrievers often rely on superficial patterns like over-prioritizing document beginnings, shorter documents, repeated entities, and literal matches. Additionally, they tend to overlook whether the document contains the query's answer, lacking deep semantic understanding. Notably, when multiple biases combine, models exhibit catastrophic performance degradation, selecting the answer-containing document in less than 3% of cases over a biased document without the answer. Furthermore, we show that these biases have direct consequences for downstream applications like RAG, where retrieval-preferred documents can mislead LLMs, resulting in a 34% performance drop compared to not providing any documents at all.
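The pairwise setup described in the abstract can be sketched in a few lines: score a query against an answer-containing document and against a biased distractor (shorter, with heavy literal overlap but no answer), and check which one the scorer prefers. The paper's experiments use dense retrievers such as Dragon+ and Contriever; the toy bag-of-words cosine scorer and the example query and documents below are stand-ins of my own, used only so the sketch runs without model downloads.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding" (term counts). The paper's
    # experiments use dense encoders like Dragon+ or Contriever;
    # this stand-in just makes the lexical-overlap bias visible.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical example pair, mirroring the paper's controlled contrast.
query = "Where was Marie Curie born?"

# Answer-containing document: states the fact, but paraphrases the query.
evidence_doc = ("Curie, the physicist who pioneered research on "
                "radioactivity, was a native of Warsaw in Poland.")

# Biased distractor: short, repeats the query entity literally, no answer.
distractor_doc = "Marie Curie. Marie Curie was born a scientist at heart."

q = embed(query)
score_evidence = cosine(q, embed(evidence_doc))
score_distractor = cosine(q, embed(distractor_doc))

# The literal-match distractor outranks the answer-containing document,
# the failure mode the paper quantifies for dense retrievers.
print(score_distractor > score_evidence)  # → True
```

In the paper's version of this test, such pairs are generated at scale from Re-DocRED so that each bias (position, length, repetition, literal match) can be isolated and measured on real dense retrievers rather than a lexical toy.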