
Show Me the Work: Fact-Checkers' Requirements for Explainable Automated Fact-Checking

Greta Warren, Irina Shklovski, Isabelle Augenstein

2025-02-18


Summary

This paper examines how to make AI fact-checking tools more useful for professional fact-checkers by improving the way these tools explain their reasoning and decisions.

What's the problem?

As AI becomes more common in online media, there's a growing need for automated fact-checking to help human fact-checkers cope with the growing volume and sophistication of misinformation. However, current AI fact-checking systems don't explain their decisions in a way that aligns with how human fact-checkers think and work, making it hard for professionals to trust and use these tools effectively.

What's the solution?

The researchers interviewed professional fact-checkers to understand how they work and what they need from AI tools. They looked at how fact-checkers evaluate evidence, make decisions, and explain their findings. They also studied how fact-checkers currently use automated tools and identified what kinds of explanations these professionals need from AI fact-checking systems. Based on this, they identified key criteria for AI explanations, such as showing the reasoning process, citing specific evidence, and being clear about uncertainty and information gaps.

Why it matters?

This research matters because it could lead to better AI fact-checking tools that human fact-checkers can trust and use more effectively. By making AI explanations more aligned with how professionals work, it could help combat the spread of misinformation more efficiently. This is crucial in today's world where false information can spread quickly online and have serious consequences for society.

Abstract

The pervasiveness of large language models and generative AI in online media has amplified the need for effective automated fact-checking to assist fact-checkers in tackling the increasing volume and sophistication of misinformation. The complex nature of fact-checking demands that automated fact-checking systems provide explanations that enable fact-checkers to scrutinise their outputs. However, it is unclear how these explanations should align with the decision-making and reasoning processes of fact-checkers to be effectively integrated into their workflows. Through semi-structured interviews with fact-checking professionals, we bridge this gap by: (i) providing an account of how fact-checkers assess evidence, make decisions, and explain their processes; (ii) examining how fact-checkers use automated tools in practice; and (iii) identifying fact-checker explanation requirements for automated fact-checking tools. The findings show unmet explanation needs and identify important criteria for replicable fact-checking explanations that trace the model's reasoning path, reference specific evidence, and highlight uncertainty and information gaps.