
DeCoRe: Decoding by Contrasting Retrieval Heads to Mitigate Hallucinations

Aryo Pradipta Gema, Chen Jin, Ahmed Abdulaal, Tom Diethe, Philip Teare, Beatrice Alex, Pasquale Minervini, Amrutha Saseendran

2024-10-25


Summary

This paper presents DeCoRe, a training-free decoding method that reduces hallucinations in large language models (LLMs) by contrasting the outputs of the base model with those of a copy whose retrieval heads have been masked.

What's the problem?

Large language models often generate text that sounds plausible but is actually incorrect or misleading, a problem known as hallucination. This happens when the model misinterprets the context or recalls information inaccurately, leading to unreliable outputs.

What's the solution?

The authors propose a technique called Decoding by Contrasting Retrieval Heads (DeCoRe). The method masks the retrieval heads, the attention heads responsible for pulling relevant information out of the context, which makes the masked model more prone to hallucination. By contrasting the outputs of the full model and the masked model at each decoding step, DeCoRe suppresses tokens that the hallucination-prone masked model favours. The strength of this contrast is guided dynamically by the conditional entropy of the base model's output distribution, leading to more accurate and trustworthy responses without any additional training.
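For intuition, here is a minimal sketch of what an entropy-guided contrast could look like at a single decoding step. The function name, the `alpha` parameter, and the exact weighting are illustrative assumptions, not the paper's precise formulation.

```python
import torch
import torch.nn.functional as F

def decore_next_token_logits(base_logits, masked_logits, alpha=1.0):
    """Illustrative sketch of entropy-guided contrastive decoding.

    base_logits:   next-token logits from the unmodified model
    masked_logits: next-token logits from the model with retrieval heads masked
    alpha:         base strength of the contrast (hypothetical parameter)
    """
    base_log_probs = F.log_softmax(base_logits, dim=-1)
    masked_log_probs = F.log_softmax(masked_logits, dim=-1)

    # Conditional entropy of the base model's next-token distribution,
    # used here as a dynamic guide for how strongly to contrast:
    # the less certain the base model is, the more weight the contrast gets.
    base_probs = base_log_probs.exp()
    entropy = -(base_probs * base_log_probs).sum(dim=-1, keepdim=True)

    # Amplify what the base model supports and penalise tokens that only
    # the hallucination-prone masked model prefers.
    return base_log_probs + alpha * entropy * (base_log_probs - masked_log_probs)
```

In practice the resulting scores would feed the usual sampling or greedy-decoding step in place of the base model's raw logits.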

Why it matters?

This research is important because it addresses a significant weakness of AI language models that can undermine their reliability in real-world applications such as education, healthcare, and customer support. By reducing inaccuracies in generated text, DeCoRe can improve the overall quality and trustworthiness of AI-generated content.

Abstract

Large Language Models (LLMs) often hallucinate, producing unfaithful or factually incorrect outputs by misrepresenting the provided context or incorrectly recalling internal knowledge. Recent studies have identified specific attention heads within the Transformer architecture, known as retrieval heads, responsible for extracting relevant contextual information. We hypothesise that masking these retrieval heads can induce hallucinations and that contrasting the outputs of the base LLM and the masked LLM can reduce hallucinations. To this end, we propose Decoding by Contrasting Retrieval Heads (DeCoRe), a novel training-free decoding strategy that amplifies information found in the context and model parameters. DeCoRe mitigates potentially hallucinated responses by dynamically contrasting the outputs of the base LLM and the masked LLM, using conditional entropy as a guide. Our extensive experiments confirm that DeCoRe significantly improves performance on tasks requiring high contextual faithfulness, such as summarisation (XSum by 18.6%), instruction following (MemoTrap by 10.9%), and open-book question answering (NQ-Open by 2.4% and NQ-Swap by 5.5%).
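To make the notion of masking retrieval heads concrete, the sketch below zeroes out the per-head attention output for a chosen set of heads. The function name and tensor layout are assumptions for illustration; how retrieval heads are identified in the first place follows prior work and is not reproduced here.

```python
import torch

def mask_retrieval_heads(attn_output_per_head, heads_to_mask):
    """Zero out the output of selected attention heads.

    attn_output_per_head: tensor of shape (batch, num_heads, seq_len, head_dim)
    heads_to_mask:        indices of the heads treated as retrieval heads
    """
    masked = attn_output_per_head.clone()
    # Removing these heads takes away the model's main mechanism for copying
    # relevant information from the context, yielding the hallucination-prone
    # counterpart that DeCoRe contrasts against the base model.
    masked[:, heads_to_mask, :, :] = 0.0
    return masked
```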