
ConflictBank: A Benchmark for Evaluating the Influence of Knowledge Conflicts in LLM

Zhaochen Su, Jun Zhang, Xiaoye Qu, Tong Zhu, Yanshu Li, Jiashuo Sun, Juntao Li, Min Zhang, Yu Cheng

2024-08-23


Summary

This paper presents ConflictBank, a benchmark designed to evaluate how knowledge conflicts affect Large Language Models (LLMs); such conflicts can lead to inaccurate responses.

What's the problem?

Large Language Models have made significant progress, but they often produce incorrect information, known as hallucinations. A major source of these errors is knowledge conflict: a clash between what the model has memorized during training and the information it retrieves from external sources. Until now, however, there has been no systematic way to study these conflicts.

What's the solution?

The authors developed ConflictBank to assess knowledge conflicts in LLMs from three perspectives: conflicts in retrieved knowledge, conflicts within the model's own encoded knowledge, and how these two types of conflict interact. Using a purpose-built construction framework, they created a dataset of over 7 million claim-evidence pairs and more than 550,000 question-answer pairs, then analyzed twelve LLMs from four model families to better understand the causes and types of conflicts. The sketch below illustrates one way such records could be structured.
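The following is a minimal sketch of how ConflictBank-style records might be represented. The field names and layout are hypothetical illustrations of the idea, not the benchmark's released schema.

```python
# A minimal sketch of ConflictBank-style records. Field names and layout
# are hypothetical illustrations, not the benchmark's released schema.
from dataclasses import dataclass

@dataclass
class ClaimEvidencePair:
    claim: str            # a factual statement drawn from a knowledge source
    evidence: str         # a passage that supports or contradicts the claim
    conflict_cause: str   # "misinformation", "temporal", or "semantic"
    supports_claim: bool  # False marks conflicting evidence

@dataclass
class QAPair:
    question: str
    answer: str                # the answer consistent with the original claim
    conflicting_evidence: str  # a passage implying a different answer

# Example: a temporal conflict, where evidence reflects an outdated fact.
example = QAPair(
    question="Which city hosted the most recent Summer Olympics?",
    answer="Paris",
    conflicting_evidence="The most recent Summer Olympics were held in Tokyo.",
)
```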

Why it matters?

Understanding knowledge conflicts is crucial because it helps improve the reliability of LLMs. By identifying and addressing these conflicts, researchers can create more accurate AI systems that provide trustworthy information, which is essential for applications in education, healthcare, and many other fields.

Abstract

Large language models (LLMs) have achieved impressive advancements across numerous disciplines, yet the critical issue of knowledge conflicts, a major source of hallucinations, has rarely been studied. Only a few studies have explored the conflicts between the inherent knowledge of LLMs and retrieved contextual knowledge. However, a thorough assessment of knowledge conflicts in LLMs is still missing. Motivated by this research gap, we present ConflictBank, the first comprehensive benchmark developed to systematically evaluate knowledge conflicts from three aspects: (i) conflicts encountered in retrieved knowledge, (ii) conflicts within the models' encoded knowledge, and (iii) the interplay between these conflict forms. Our investigation delves into four model families and twelve LLM instances, meticulously analyzing conflicts stemming from misinformation, temporal discrepancies, and semantic divergences. Based on our proposed novel construction framework, we create 7,453,853 claim-evidence pairs and 553,117 QA pairs. We present numerous findings on model scale, conflict causes, and conflict types. We hope our ConflictBank benchmark will help the community better understand model behavior in conflicts and develop more reliable LLMs.
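To make the first of these three aspects concrete, here is a minimal, hypothetical sketch of one way to probe conflicts in retrieved knowledge: compare a model's closed-book answer with its answer after conflicting evidence is injected into the prompt. The `generate` function is a placeholder for any LLM inference call, not an API from the paper or benchmark.

```python
# A hypothetical probe for retrieved-knowledge conflicts: does the model's
# answer flip once conflicting evidence is placed in its context?
# `generate` is a placeholder for any LLM completion call (not from the paper).

def generate(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM inference call here")

def conflict_flip_rate(qa_pairs) -> float:
    """Fraction of questions whose answer changes when conflicting
    evidence (see the QAPair sketch above) is injected into the prompt."""
    flipped = 0
    for qa in qa_pairs:
        closed_book = generate(f"Question: {qa.question}\nAnswer:")
        with_conflict = generate(
            f"Context: {qa.conflicting_evidence}\n"
            f"Question: {qa.question}\nAnswer:"
        )
        flipped += closed_book.strip() != with_conflict.strip()
    return flipped / len(qa_pairs)
```

Under these assumptions, a high flip rate would suggest the model readily defers to conflicting context over its parametric knowledge, while a low rate would suggest it sticks with what it memorized.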