Progressive Training for Explainable Citation-Grounded Dialogue: Reducing Hallucination to Zero in English-Hindi LLMs
Vedant Pandya
2026-03-24
Summary
This research focuses on building better chatbot systems that can have informed conversations, drawing on information from external sources like a knowledge base. The work specifically addresses limitations in current systems, which mostly work only with English and don't clearly show *where* they got their information, making it hard to trust their answers.
What's the problem?
Current knowledge-grounded dialogue systems, or chatbots that draw on external knowledge, have several issues. They are largely limited to English, they rarely provide clear citations to back up their claims, and it is difficult to understand *why* the chatbot said what it did. This makes the information they provide hard to verify and their responses hard to trust. Essentially, they can 'hallucinate', making things up without any basis in fact, and without citations or explanations there is no easy way to catch or correct these errors.
What's the solution?
The researchers developed a new training process called XKD-Dial, which stands for Explainable Knowledge-Dialogue. This process has four stages: first, it adapts the model to work with multiple languages (English and Hindi). Second, it trains the model on English conversations while requiring it to cite its sources. Third, it extends this training to conversations in both English and Hindi. Finally, it applies a reinforcement-learning technique called GRPO (Group Relative Policy Optimization) to further refine the model, rewarding good citation practices. They tested different sizes of models throughout this process and analyzed how the models learned to use citations.
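The summary does not spell out what "rewarding it for good citation practices" looks like concretely. As a rough illustration only, a citation-aware reward could score each response by comparing the source IDs it cites against the gold supporting sources. Everything in the sketch below, the bracketed [1]-style citation markers, the function name, and the F1-based scoring, is an assumption for illustration, not the authors' actual reward.

```python
import re

def citation_reward(response: str, gold_ids: set[int], num_sources: int) -> float:
    """Hypothetical citation-aware reward for GRPO-style alignment.

    Scores a response by the F1 overlap between the source IDs it
    cites (assumed to appear as bracketed markers like "[1]") and the
    gold supporting sources. Citing a source that does not exist in
    the knowledge context zeroes out the reward.
    """
    cited = {int(m) for m in re.findall(r"\[(\d+)\]", response)}
    # Hard penalty: referencing a non-existent source is never acceptable.
    if any(c < 1 or c > num_sources for c in cited):
        return 0.0
    if not cited and not gold_ids:
        return 1.0  # correctly cites nothing when nothing is needed
    if not cited or not gold_ids:
        return 0.0
    tp = len(cited & gold_ids)
    precision = tp / len(cited)
    recall = tp / len(gold_ids)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

In a GRPO loop, a reward of this shape would be computed per sampled response and responses would be compared within a group; the snippet only shows the per-response scoring.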
Why it matters?
This work is important because it moves towards building more reliable and trustworthy chatbots. By supporting multiple languages, requiring citations, and providing insights into the chatbot's reasoning, this research helps address key limitations in current dialogue systems. The fact that they reduced 'hallucinations' to zero for some models is a big step forward, and the ability to build effective systems even with smaller models makes this technology more accessible.
Abstract
Knowledge-grounded dialogue systems aim to generate informative, contextually relevant responses by conditioning on external knowledge sources. However, most existing approaches focus exclusively on English, lack explicit citation mechanisms for verifying factual claims, and offer limited transparency into model decision-making. We present XKD-Dial, a progressive four-stage training pipeline for explainable, knowledge-grounded dialogue generation in a bilingual (English-Hindi) setting, comprising: (1) multilingual adaptation, (2) English dialogue SFT with citation grounding, (3) bilingual dialogue SFT, and (4) GRPO alignment with citation-aware rewards. We evaluate six models spanning encoder-decoder (250M-3B) and decoder-only (1B-7B) architectures at every pipeline stage. Our key contributions are: (i) three post-hoc explainability analyses - cross-attention alignment, Integrated Gradients attribution, and occlusion-based causal grounding - applied systematically across the training trajectory to reveal how citation behaviour is learned, not only whether it is learned; (ii) citation-grounded SFT reduces hallucination to 0.0% for encoder-decoder models from Stage 2 onward; (iii) the progressive pipeline prevents catastrophic forgetting while improving Hindi capabilities; (iv) smaller models match larger models on English after SFT; and (v) GRPO provides marginal improvement over well-designed SFT for structured citation tasks. We evaluate across six automatic metrics (BLEU, ROUGE, BERTScore, FactScore, Citation-F1, and hallucination rate).