Profiling News Media for Factuality and Bias Using LLMs and the Fact-Checking Methodology of Human Experts
Zain Muhammad Mujahid, Dilshod Azizov, Maha Tufail Agro, Preslav Nakov
2025-06-17
Summary
This paper introduces a new way to use large language models (LLMs) to assess how truthful news media outlets are and what political biases they may have. The method uses carefully designed questions, called prompts, that mimic how professional fact-checkers evaluate news sources. By combining many LLM responses, the system predicts both the factuality and the political leaning of entire news outlets without needing to check each article by hand.
What's the problem?
The problem is that traditional techniques for assessing news media often take a lot of time and human effort, and can be influenced by personal bias. Also, existing automated methods struggle because labeling and analyzing individual news articles to find bias and factual errors is slow and costly. This makes it hard to keep up with the huge amount of news being created every day and to accurately measure the reliability of different media sources.
What's the solution?
The solution was to create a method that prompts LLMs using the same criteria that human experts apply when checking news sources. These prompts elicit detailed answers about an outlet's bias and factuality. By gathering many responses from the LLMs and aggregating them, the system produces a more reliable prediction for entire media outlets rather than for single articles. In extensive experiments, the method outperformed previous approaches.
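The pipeline described above can be sketched roughly as follows. This is a minimal illustration, not the paper's actual implementation: the prompt wording, the label set, the `query_llm` function, and the majority-vote aggregation are all assumptions made for the sketch (the real system uses the authors' curated expert-criteria prompts and its own aggregation scheme).

```python
from collections import Counter

# Hypothetical prompt loosely modeled on expert fact-checking criteria
# (sourcing, headline accuracy, loaded language); the paper's prompts differ.
CRITERIA_PROMPT = (
    "Rate the factuality of this article as 'high', 'mixed', or 'low', "
    "considering its sourcing, headline accuracy, and use of loaded "
    "language:\n{article}"
)

def query_llm(prompt: str) -> str:
    """Stand-in for a real LLM API call. Returns a canned label so the
    sketch runs offline; a real system would call a hosted or local model."""
    return "high"

def profile_outlet(articles: list[str]) -> str:
    """Score each article with the LLM, then aggregate the per-article
    labels into one outlet-level prediction by simple majority vote."""
    labels = [query_llm(CRITERIA_PROMPT.format(article=a)) for a in articles]
    top_label, _count = Counter(labels).most_common(1)[0]
    return top_label

# With the stubbed LLM above, any outlet is labeled "high".
sample_articles = ["Article text 1 ...", "Article text 2 ..."]
print(profile_outlet(sample_articles))
```

The key design point this sketch captures is that the unit of prediction is the outlet, not the article: many noisy per-article judgments are combined into one more stable outlet-level label.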
Why it matters?
This matters because knowing whether news media are truthful and fair is very important for people to make good decisions and trust the information they get. Using AI to do this kind of profiling quickly and accurately helps fight misinformation and biased reporting. It also makes it easier for fact-checkers and researchers to monitor media reliability on a large scale, which supports healthier public conversations and informed communities.
Abstract
A novel methodology using large language models with curated prompts improves predictions of media outlet factuality and political bias, validated through experiments and error analysis.