Large Language Model Hacking: Quantifying the Hidden Risks of Using LLMs for Text Annotation
Joachim Baumann, Paul Röttger, Aleksandra Urman, Albert Wendsjö, Flor Miriam Plaza-del-Arco, Johannes B. Gruber, Dirk Hovy
2025-09-15
Summary
This paper investigates a problem with using powerful AI language models, like ChatGPT, in social science research. While these models can help with tasks like labeling data, the results they give aren't always consistent and can be easily influenced by how you ask the question or which model you use, potentially leading to wrong conclusions.
What's the problem?
Social science researchers are increasingly using large language models to automate annotation tasks, but the outputs of these models aren't always reliable. Small changes in how a model is set up, such as choosing a different model, slightly rewording the instructions (the prompt), or adjusting the temperature setting that controls how varied its answers are, can drastically change the results. This inconsistency introduces errors that can lead researchers to incorrectly support or reject their hypotheses: they might think they've found a significant effect when they haven't, or miss one that's really there. The authors call this issue 'LLM hacking' because results can be pushed around so easily, whether by accident or on purpose.
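To make these "researcher degrees of freedom" concrete, here is a tiny, hypothetical sketch of the kind of configuration space involved; the model names, prompts, and temperature values are placeholders for illustration, not the ones used in the paper.

```python
# Hypothetical illustration of the configuration space a researcher navigates
# when setting up an LLM annotation pipeline; all names below are made up.
from itertools import product

models = ["model-a-large", "model-b-8b", "model-c-7b"]
prompts = [
    "Label the stance of this text.",
    "Classify whether the author supports the claim.",
    "Is the following text in favor, against, or neutral?",
]
temperatures = [0.0, 0.7, 1.0]

configs = list(product(models, prompts, temperatures))
print(f"{len(configs)} seemingly equivalent annotation setups")
# Each of these 27 setups can yield different labels for the same texts,
# and therefore different downstream statistical conclusions.
```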
What's the solution?
The researchers measured how easily these errors happen by repeating 37 different data annotation tasks from 21 published studies with 18 different language models. They generated over 13 million labels and then tested 2,361 realistic hypotheses to see whether the choices researchers make when setting up these models change the statistical conclusions. Even with the best models, about one in three hypotheses led to an incorrect conclusion; with small models, it was about half. They also evaluated ways to reduce the problem, such as adding human-checked labels and applying statistical correction techniques, but found that many common fixes didn't really help.
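As a rough illustration of the replication logic (not the authors' actual pipeline), the sketch below simulates annotation configurations with different error rates, runs the same simple regression test on each set of labels, and counts how often the statistical conclusion flips relative to the gold-standard annotations. All numbers, error rates, and function names are invented for illustration.

```python
# Minimal sketch: how often does a statistical conclusion flip across
# different (simulated) LLM annotation configurations?
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n = 500
x = rng.normal(size=n)                                         # independent variable
true_label = (0.15 * x + rng.normal(size=n) > 0).astype(int)   # gold annotations

def llm_annotate(gold, error_rate, rng):
    """Simulate one LLM configuration as gold labels with config-specific noise."""
    flip = rng.random(len(gold)) < error_rate
    return np.where(flip, 1 - gold, gold)

def significant_slope(x, y, alpha=0.05):
    """Return (is_significant, sign_of_slope) for a simple regression of y on x."""
    res = stats.linregress(x, y)
    return res.pvalue < alpha, np.sign(res.slope)

gold_sig, gold_sign = significant_slope(x, true_label)

# Sweep hypothetical configurations (stand-ins for model/prompt choices),
# each with its own annotation error rate, and count flipped conclusions.
flips = 0
configs = np.linspace(0.05, 0.40, 20)
for err in configs:
    llm_labels = llm_annotate(true_label, err, rng)
    sig, sign = significant_slope(x, llm_labels)
    if sig != gold_sig or (sig and sign != gold_sign):
        flips += 1

print(f"Conclusion flipped in {flips}/{len(configs)} configurations")
```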
Why does it matter?
This research is important because it shows that researchers need to be very careful when using AI language models. Just because a model gives an answer doesn't mean it's correct. The study highlights the need for more verification of results, especially when findings are close to being considered statistically significant, and emphasizes the value of having humans check the AI's work. It also demonstrates how easily someone could intentionally manipulate the results to support a desired outcome, which is a serious concern for the integrity of research.
Abstract
Large language models (LLMs) are rapidly transforming social science research by enabling the automation of labor-intensive tasks like data annotation and text analysis. However, LLM outputs vary significantly depending on the implementation choices made by researchers (e.g., model selection, prompting strategy, or temperature settings). Such variation can introduce systematic biases and random errors, which propagate to downstream analyses and cause Type I, Type II, Type S, or Type M errors. We call this LLM hacking. We quantify the risk of LLM hacking by replicating 37 data annotation tasks from 21 published social science research studies with 18 different models. Analyzing 13 million LLM labels, we test 2,361 realistic hypotheses to measure how plausible researcher choices affect statistical conclusions. We find incorrect conclusions based on LLM-annotated data in approximately one in three hypotheses for state-of-the-art models, and in half the hypotheses for small language models. While our findings show that higher task performance and better general model capabilities reduce LLM hacking risk, even highly accurate models do not completely eliminate it. The risk of LLM hacking decreases as effect sizes increase, indicating the need for more rigorous verification of findings near significance thresholds. Our extensive analysis of LLM hacking mitigation techniques emphasizes the importance of human annotations in reducing false positive findings and improving model selection. Surprisingly, common regression estimator correction techniques are largely ineffective in reducing LLM hacking risk, as they heavily trade off Type I vs. Type II errors. Beyond accidental errors, we find that intentional LLM hacking is unacceptably simple. With just a few LLMs and a handful of prompt paraphrases, anything can be presented as statistically significant.
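For readers unfamiliar with the error taxonomy in the abstract: Type I and Type II errors are false positives and false negatives, while Type S (sign) and Type M (magnitude) errors, in the sense popularized by Gelman and Carlin, occur when a significant estimate has the wrong sign or a strongly exaggerated size. The hypothetical helper below shows one plausible way to operationalize this comparison against ground-truth annotations; the exaggeration threshold is an assumption for illustration, not a value from the paper.

```python
# Hypothetical helper (not from the paper): classify the error made when a
# hypothesis test on LLM-annotated data is compared against the same test
# run on ground-truth (human) annotations.
def classify_llm_hacking_error(gold_sig, gold_effect, llm_sig, llm_effect,
                               exaggeration_factor=2.0):
    """Return the error type, if any, following the Type I/II/S/M taxonomy."""
    if llm_sig and not gold_sig:
        return "Type I (false positive)"
    if not llm_sig and gold_sig:
        return "Type II (false negative)"
    if llm_sig and gold_sig:
        if llm_effect * gold_effect < 0:
            return "Type S (wrong sign)"
        if abs(llm_effect) > exaggeration_factor * abs(gold_effect):
            return "Type M (exaggerated magnitude)"
    return "no error"

# Example: the LLM-based estimate is significant, the gold-standard one is not.
print(classify_llm_hacking_error(gold_sig=False, gold_effect=0.02,
                                 llm_sig=True, llm_effect=0.31))
# -> Type I (false positive)
```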