REFIND: Retrieval-Augmented Factuality Hallucination Detection in Large Language Models
DongGeon Lee, Hwanjo Yu
2025-02-20
Summary
This paper introduces REFIND, a new system that detects when AI language models make up false information (hallucinations) in their answers. It's like a fact-checker for AI that works across many languages.
What's the problem?
AI language models sometimes give incorrect information when answering questions, especially for topics that require a lot of knowledge. This makes it hard to trust these AI systems for important tasks.
What's the solution?
The researchers created REFIND, which looks up information from reliable sources and compares it to what the AI says. They also invented a new way to measure how much the AI's answer changes when given correct information, called the Context Sensitivity Ratio (CSR). REFIND uses this to figure out which parts of the AI's answer might be made up. They tested REFIND on nine different languages and found it worked better than other methods at spotting false information.
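The idea behind CSR can be sketched in a few lines of Python. The paper's exact formula isn't reproduced here, so this is one plausible reading, not the authors' implementation: compare the probability the model assigns to each token of its answer with and without the retrieved evidence in the prompt, and flag runs of tokens whose probability collapses once the evidence is supplied. The threshold value and the direction of the ratio are illustrative assumptions.

```python
def context_sensitivity_ratio(p_without_evidence, p_with_evidence):
    """Per-token ratio: how much less likely each generated token becomes
    once retrieved evidence is added to the prompt (higher = more suspicious).
    This ratio's exact form is an assumption, not the paper's definition."""
    return [pwo / pw for pwo, pw in zip(p_without_evidence, p_with_evidence)]

def flag_hallucinated_spans(csr_scores, threshold=2.0):
    """Group consecutive tokens whose CSR exceeds the (illustrative)
    threshold into (start, end) spans, end index exclusive."""
    spans, start = [], None
    for i, score in enumerate(csr_scores):
        if score > threshold and start is None:
            start = i
        elif score <= threshold and start is not None:
            spans.append((start, i))
            start = None
    if start is not None:
        spans.append((start, len(csr_scores)))
    return spans

# Toy example: the model's last token ("1850", say) looks fine from the
# bare prompt, but becomes far less likely once the retrieved document
# is prepended, so it gets flagged.
p_without = [0.90, 0.80, 0.70, 0.90, 0.60]  # P(token | bare prompt)
p_with    = [0.90, 0.80, 0.70, 0.90, 0.05]  # P(token | evidence + prompt)
scores = context_sensitivity_ratio(p_without, p_with)
print(flag_hallucinated_spans(scores))  # [(4, 5)]
```

In practice the two probability sequences would come from scoring the same answer tokens under two prompts (with and without retrieved documents) using the LLM's token log-probabilities.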
Why it matters?
This matters because it could make AI language models more trustworthy and useful in real-world situations. By being able to detect when AI is making things up, we can create safer and more reliable AI systems that work well in many languages. This could help people use AI for important tasks without worrying about getting false information.
Abstract
Hallucinations in large language model (LLM) outputs severely limit their reliability in knowledge-intensive tasks such as question answering. To address this challenge, we introduce REFIND (Retrieval-augmented Factuality hallucINation Detection), a novel framework that detects hallucinated spans within LLM outputs by directly leveraging retrieved documents. As part of REFIND, we propose the Context Sensitivity Ratio (CSR), a novel metric that quantifies the sensitivity of LLM outputs to retrieved evidence. This innovative approach enables REFIND to efficiently and accurately detect hallucinations, setting it apart from existing methods. In the evaluation, REFIND demonstrated robustness across nine languages, including low-resource settings, and significantly outperformed baseline models, achieving superior IoU scores in identifying hallucinated spans. This work highlights the effectiveness of quantifying context sensitivity for hallucination detection, thereby paving the way for more reliable and trustworthy LLM applications across diverse languages.
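The abstract scores span detection with IoU (intersection over union). A minimal sketch of how span-level IoU can be computed, assuming the benchmark compares the sets of character positions covered by predicted and gold hallucinated spans (the exact evaluation protocol may differ):

```python
def covered_indices(spans):
    """All character positions covered by a list of (start, end) spans,
    with end exclusive."""
    indices = set()
    for start, end in spans:
        indices.update(range(start, end))
    return indices

def span_iou(pred_spans, gold_spans):
    """Intersection-over-union of predicted vs. gold hallucinated spans."""
    pred, gold = covered_indices(pred_spans), covered_indices(gold_spans)
    union = pred | gold
    if not union:
        return 1.0  # both empty: treat perfect agreement as 1.0 by convention
    return len(pred & gold) / len(union)

# Prediction covers chars 0-4, gold covers 3-7: 2 shared / 8 total.
print(span_iou([(0, 5)], [(3, 8)]))  # 0.25
```

A higher IoU means the detector's flagged spans overlap more tightly with the human-annotated hallucinated spans.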