
CoverBench: A Challenging Benchmark for Complex Claim Verification

Alon Jacovi, Moran Ambar, Eyal Ben-David, Uri Shaham, Amir Feder, Mor Geva, Dror Marcus, Avi Caciularu

2024-08-07


Summary

This paper introduces CoverBench, a new benchmark designed to evaluate how well language models can verify complex claims that require advanced reasoning skills.

What's the problem?

As language models (LMs) are increasingly used to answer difficult questions, it's important to ensure that their outputs are accurate. However, existing benchmarks for verifying LM outputs tend to focus on simpler tasks or narrow domains, making it hard to assess verification performance in settings that demand complex reasoning.

What's the solution?

CoverBench aims to fill this gap by providing a unified evaluation for complex claim verification across a wide variety of reasoning types and domains, such as finance, medicine, and law. It features relatively long inputs and standardized formats, including multiple representations for tables where available and a consistent schema across datasets. The authors manually vetted the data to keep label noise low and selected challenging examples to test the limits of current models. They also report competitive baseline results showing that CoverBench is difficult and leaves significant headroom compared to other benchmarks.
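To make the task concrete, the sketch below frames complex claim verification as a binary decision: given a (possibly long) context, such as a linearized table or document, plus a claim, a model must say whether the context supports the claim. The prompt wording, the example fields, and the `query_model` helper are illustrative assumptions, not CoverBench's actual schema or evaluation code.

```python
# Minimal sketch of claim verification as a binary decision task.
# The example fields and the query_model() helper are hypothetical;
# they are not taken from the CoverBench release.

def build_prompt(context: str, claim: str) -> str:
    """Combine a long context and a claim into a single verification prompt."""
    return (
        "Read the context and decide whether the claim is supported.\n\n"
        f"Context:\n{context}\n\n"
        f"Claim: {claim}\n\n"
        "Answer with exactly one word: TRUE or FALSE."
    )

def verify_claim(context: str, claim: str, query_model) -> bool:
    """Return True if the model judges the claim as supported by the context."""
    answer = query_model(build_prompt(context, claim))
    return answer.strip().upper().startswith("TRUE")

if __name__ == "__main__":
    # Hypothetical example in the spirit of the benchmark (not a real item).
    example = {
        "context": "Revenue in 2022 was $10M; revenue in 2023 was $12M.",
        "claim": "Revenue grew by 20% from 2022 to 2023.",
        "label": True,  # hypothetical gold label
    }
    fake_model = lambda prompt: "TRUE"  # stand-in for a real LM call
    prediction = verify_claim(example["context"], example["claim"], fake_model)
    print("correct" if prediction == example["label"] else "incorrect")
```

Accuracy on a set of such claims, with a real model in place of `fake_model`, is the kind of baseline number the paper reports to show how much headroom remains.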

Why it matters?

CoverBench is significant because it helps assess and improve the reliability of language models in real-world applications where accuracy is crucial. By focusing on complex claim verification, this benchmark can guide future research and development of more robust AI systems capable of handling intricate reasoning tasks.

Abstract

There is a growing line of research on verifying the correctness of language models' outputs. At the same time, LMs are being used to tackle complex queries that require reasoning. We introduce CoverBench, a challenging benchmark focused on verifying LM outputs in complex reasoning settings. Datasets that can be used for this purpose are often designed for other complex reasoning tasks (e.g., QA) targeting specific use-cases (e.g., financial tables), requiring transformations, negative sampling and selection of hard examples to collect such a benchmark. CoverBench provides a diversified evaluation for complex claim verification in a variety of domains, types of reasoning, relatively long inputs, and a variety of standardizations, such as multiple representations for tables where available, and a consistent schema. We manually vet the data for quality to ensure low levels of label noise. Finally, we report a variety of competitive baseline results to show CoverBench is challenging and has very significant headroom. The data is available at https://huggingface.co/datasets/google/coverbench .
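For readers who want to inspect the data, the snippet below loads the release from the Hugging Face Hub with the `datasets` library and prints the available splits and the fields of one example. The exact split and column names are not stated here, so the code only introspects whatever the dataset provides.

```python
# Sketch: download CoverBench from the Hugging Face Hub and inspect its structure.
# Requires: pip install datasets
from datasets import load_dataset

# Load the dataset named in the abstract. If the release defines multiple
# configs, pass the config name as a second argument to load_dataset.
dataset = load_dataset("google/coverbench")

# Show the splits and their sizes.
print(dataset)

# Inspect the column names and one example from the first available split.
first_split = next(iter(dataset.values()))
print(first_split.column_names)
print(first_split[0])
```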