
Do RAG Systems Suffer From Positional Bias?

Florin Cuconasu, Simone Filice, Guy Horowitz, Yoelle Maarek, Fabrizio Silvestri

2025-05-28


Summary

This paper examines whether Retrieval Augmented Generation (RAG) systems, AI models that answer questions using passages retrieved from documents, are affected by where the relevant information appears in the ranked list of retrieved results.

What's the problem?

Researchers suspected that these AI systems pay too much attention to the first few passages they receive, which would make their answers less accurate whenever the best information sits further down the list.

What's the solution?

The researchers found that the real issue is distraction rather than position: RAG systems can be misled by top-ranked passages that look relevant but are not, so the usual notion of 'positional bias', where the model simply favors the first things it sees, matters less than previously thought. In other words, the main problem is that whatever the retriever ranks highest can throw the model off, even when it is not the most relevant passage.
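To make the idea concrete, here is a minimal sketch (not from the paper) of how one could probe positional bias: place the gold passage at each rank among distractors and measure accuracy per position. The `llm_answer` callable and the data layout are assumptions for illustration; a real study would call an actual RAG pipeline.

```python
def accuracy_by_gold_position(questions, llm_answer):
    """For each question, insert the gold passage at every rank among the
    distractors and record whether the model answers correctly.
    `llm_answer(question, passages)` is a placeholder for a real RAG call."""
    n_slots = len(questions[0]["distractors"]) + 1
    correct = [0] * n_slots
    for q in questions:
        for pos in range(n_slots):
            passages = list(q["distractors"])
            passages.insert(pos, q["gold"])  # gold passage at rank `pos`
            if llm_answer(q["question"], passages) == q["answer"]:
                correct[pos] += 1
    return [c / len(questions) for c in correct]

# Toy stand-in model: answers correctly only when the gold passage is ranked
# first, mimicking an extreme primacy bias (purely illustrative).
def toy_model(question, passages):
    return "yes" if "gold" in passages[0] else "no"

qs = [{"question": "q", "answer": "yes",
       "gold": "gold passage", "distractors": ["distractor 1", "distractor 2"]}]
print(accuracy_by_gold_position(qs, toy_model))  # → [1.0, 0.0, 0.0]
```

A flat accuracy curve across positions would indicate little positional bias; the paper's point is that accuracy drops are driven more by distracting passages in the context than by where the gold passage sits.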

Why it matters?

This is important because it helps researchers and developers understand what really affects the accuracy of AI answers, so they can design better systems that don't get distracted by the wrong information.

Abstract

Retrieval Augmented Generation suffers from high distraction from top-ranked passages, rendering LLM positional bias less impactful than previously thought.