Truth or Mirage? Towards End-to-End Factuality Evaluation with LLM-OASIS
Alessandro Scirè, Andrei Stefan Bejgu, Simone Tedeschi, Karim Ghonim, Federico Martelli, Roberto Navigli
2024-12-04

Summary
This paper presents LLM-OASIS, a large-scale resource for evaluating the factual accuracy of large language models (LLMs): a dataset of paired factual and deliberately falsified texts that can be used to train and benchmark systems for detecting when these models generate incorrect information.
What's the problem?
While large language models have become very good at generating human-like text, they often produce false or misleading information, known as 'hallucinations.' This is a serious issue because it can spread incorrect facts. Existing resources for evaluating the factuality of these models have limitations: they are tailored to a specific task or domain, too small to train new evaluators, or restricted to simpler tasks such as verifying individual claims.
What's the solution?
To tackle these problems, the researchers introduce LLM-OASIS, to their knowledge the largest resource available for training end-to-end factuality evaluators. They built the dataset by extracting claims from Wikipedia passages, falsifying a subset of those claims, and generating pairs of factual and unfactual texts (a sketch of this pipeline follows below). Human annotators then validated the quality of the dataset and produced a gold-standard benchmark for measuring how well systems assess factual accuracy. Experiments show that LLM-OASIS challenges even state-of-the-art models, exposing clear limitations in their ability to judge factuality.
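The construction process can be pictured with a short, hypothetical sketch. The function names, prompts, and selection strategy below are illustrative assumptions rather than the paper's actual implementation; `llm` stands for whatever text-generation model is used for claim extraction, falsification, and paragraph generation.

```python
# Hypothetical sketch of an LLM-OASIS-style construction pipeline.
# `llm` is any prompt-in / text-out generation function; prompts and names
# here are illustrative assumptions, not the paper's published code.

import random
from dataclasses import dataclass
from typing import Callable

@dataclass
class PairedExample:
    factual_text: str    # paragraph grounded in the original Wikipedia claims
    unfactual_text: str  # paragraph in which one claim has been falsified

def build_example(passage: str, llm: Callable[[str], str]) -> PairedExample:
    # 1) Extract atomic claims from the Wikipedia passage.
    raw = llm(f"List, one per line, the factual claims made in:\n{passage}")
    claims = [line.strip("- ").strip() for line in raw.splitlines() if line.strip()]

    # 2) Falsify one randomly chosen claim (e.g., swap an entity or a date).
    idx = random.randrange(len(claims))
    falsified = llm(f"Rewrite this claim so that it is false: {claims[idx]}")
    unfactual_claims = claims[:idx] + [falsified] + claims[idx + 1:]

    # 3) Generate a fluent paragraph for each claim set, yielding a
    #    factual / unfactual text pair.
    factual = llm("Write a short paragraph expressing only these claims:\n" + "\n".join(claims))
    unfactual = llm("Write a short paragraph expressing only these claims:\n" + "\n".join(unfactual_claims))
    return PairedExample(factual_text=factual, unfactual_text=unfactual)
```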
Why it matters?
This research is important because it provides a new way to evaluate and improve the reliability of large language models. By focusing on factuality and providing a robust dataset for both training and benchmarking evaluators, LLM-OASIS makes it easier to measure, and ultimately improve, how accurately AI systems convey information. This can help ensure that AI technologies are more trustworthy and beneficial in applications such as education, journalism, and everyday decision-making.
Abstract
After the introduction of Large Language Models (LLMs), there have been substantial improvements in the performance of Natural Language Generation (NLG) tasks, including Text Summarization and Machine Translation. However, LLMs still produce outputs containing hallucinations, that is, content not grounded in factual information. Therefore, developing methods to assess the factuality of LLMs has become urgent. Indeed, resources for factuality evaluation have recently emerged. Although challenging, these resources face one or more of the following limitations: (i) they are tailored to a specific task or domain; (ii) they are limited in size, thereby preventing the training of new factuality evaluators; (iii) they are designed for simpler verification tasks, such as claim verification. To address these issues, we introduce LLM-Oasis, to the best of our knowledge the largest resource for training end-to-end factuality evaluators. LLM-Oasis is constructed by extracting claims from Wikipedia, falsifying a subset of these claims, and generating pairs of factual and unfactual texts. We then rely on human annotators both to validate the quality of our dataset and to create a gold-standard test set for benchmarking factuality evaluation systems. Our experiments demonstrate that LLM-Oasis presents a significant challenge for state-of-the-art LLMs, with GPT-4o achieving up to 60% accuracy in our proposed end-to-end factuality evaluation task, highlighting its potential to drive future research in the field.
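To make the end-to-end task concrete, here is a minimal, hypothetical sketch of the evaluation loop the abstract implies: an evaluator labels each text as factual or not, and accuracy is computed over the gold factual/unfactual pairs. The function name and the judge interface are assumptions for illustration, not the paper's actual code.

```python
# Hypothetical sketch of the end-to-end factuality evaluation loop: a judge
# labels each text as factual or not, and accuracy is computed against the
# gold factual / unfactual pairs. The judge interface is an assumption.

from typing import Callable, Iterable, Tuple

def end_to_end_accuracy(
    pairs: Iterable[Tuple[str, str]],    # (factual_text, unfactual_text) gold pairs
    is_factual: Callable[[str], bool],   # evaluator under test, e.g. a prompted LLM
) -> float:
    correct = total = 0
    for factual_text, unfactual_text in pairs:
        correct += is_factual(factual_text)        # expected: True
        correct += not is_factual(unfactual_text)  # expected: False
        total += 2
    return correct / total if total else 0.0
```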