Even Small Reasoners Should Quote Their Sources: Introducing the Pleias-RAG Model Family
Pierre-Carl Langlais, Pavel Chizhov, Mattia Nee, Carlos Rosas Hinostroza, Matthieu Delsart, Irène Girard, Othman Hicheur, Anastasia Stasenko, Ivan P. Yamshchikov
2025-04-28
Summary
This paper introduces the Pleias-RAG model family: two new AI models designed not only to answer questions well but also to show where their information comes from, even when working across different languages.
What's the problem?
Many AI models, especially smaller ones, give answers without citing any sources, which makes them hard to trust and their claims hard to verify. The problem is even harder when the questions and the source documents span multiple languages.
What's the solution?
The researchers created two mid-sized reasoning models, Pleias-RAG-350m and Pleias-RAG-1B, built for retrieval-augmented generation (RAG): they reason over retrieved documents and attach citations to their answers. On dedicated RAG benchmarks they compete with much larger models, while also supporting multilingual citation and making it easy to trace where their information comes from.
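The paper's exact citation format is not described in this summary; as a minimal illustrative sketch, assuming the model emits numbered markers like [1] that point back to the retrieved sources, a simple check that every citation refers to a real source might look like this (the function name and marker convention are hypothetical):

```python
import re

def check_citations(answer: str, sources: list[str]) -> bool:
    """Return True if every citation marker like [1] in the answer
    points to a source that actually exists in the provided list."""
    cited = {int(m) for m in re.findall(r"\[(\d+)\]", answer)}
    return all(1 <= i <= len(sources) for i in cited)

sources = [
    "Pleias-RAG-350m and Pleias-RAG-1B are mid-sized reasoning models.",
    "They support multilingual citation and grounding.",
]
answer = "The models support multilingual grounding [2] and are mid-sized [1]."
print(check_citations(answer, sources))  # → True
```

Checks like this are what grounded citations enable: a reader (or a program) can verify the answer against its sources instead of taking the model's word for it.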
Why it matters?
This matters because verifiable citations make AI more trustworthy and transparent: people can check the facts behind an answer, which is important for learning, research, and decision-making across languages.
Abstract
Two new mid-sized reasoning models, Pleias-RAG-350m and Pleias-RAG-1B, perform competitively on RAG benchmarks and support multilingual citation and grounding.