The Curious Case of Factual (Mis)Alignment between LLMs' Short- and Long-Form Answers
Saad Obaid ul Islam, Anne Lauscher, Goran Glavaš
2025-10-14
Summary
This paper investigates a surprising flaw in large language models (LLMs): they can answer factual questions directly, but often get the same facts wrong when asked to use them while writing something more complex, like a biography. It highlights a gap between knowing information and being able to reliably *use* that information.
What's the problem?
LLMs are really good at answering simple questions like “When was Einstein born?” on tests, but we don’t really know if they’re consistently reliable when asked to use that same information in a more complicated way, like writing a paragraph *about* Einstein. This inconsistency makes it hard to trust these models because their accuracy seems to change depending on how the question is asked. Essentially, just because a model knows a fact in isolation doesn’t mean it will remember it when it needs to apply it to a larger task.
What's the solution?
The researchers created a special testing method called SLAQ, which stands for Short-Long Form Alignment for Factual Question Answering. They asked 16 different LLMs the same 600 factual questions in two ways: first, as a direct question (the 'short' form), and then embedded within a more complex question or writing prompt (the 'long' form). They then compared the answers. They found that models frequently gave different answers to the same fact depending on whether it was asked directly or within a longer context, and noticed 'momentum' effects, where runs of correct or incorrect answers tended to continue. They also looked *inside* the models to see how they process information and found that when answers were consistent, the same parts of the model were activated, and that similarity between these internal activations could predict consistency with up to 78% accuracy.
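The core comparison can be pictured as a simple alignment check: for each fact, take the model's answer to the short, direct question and the answer it gives when the same fact appears inside a longer query, then measure how often they agree. The sketch below is illustrative only — the function names and the toy answer pairs are assumptions, not from the paper.

```python
# Hypothetical sketch of a SLAQ-style short/long alignment check.
# Names and data are illustrative assumptions, not the authors' code.

def normalize(answer: str) -> str:
    """Lowercase and strip so trivially different strings still match."""
    return answer.strip().lower()

def alignment_rate(pairs):
    """Fraction of facts where the short- and long-form answers agree."""
    aligned = sum(normalize(short) == normalize(long_) for short, long_ in pairs)
    return aligned / len(pairs)

# Toy example: (short-form answer, answer extracted from long-form output)
pairs = [
    ("1879", "1879"),    # aligned: same birth year in both settings
    ("Ulm", "Munich"),   # misaligned: the fact drifts in the long form
    ("1955", "1955"),    # aligned
]

print(alignment_rate(pairs))  # → 0.6666666666666666
```

In practice, extracting the answer from a long-form response and deciding when two answers "agree" (exact match vs. semantic match) is the hard part; this sketch only shows the shape of the comparison.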
Why it matters?
This research is important because it shows that current ways of testing LLMs might be misleading. We tend to think if a model passes a simple fact-checking test, it’s reliable, but this study proves that’s not always true. It emphasizes that factual consistency – getting the facts right no matter how the question is asked – is crucial for building trustworthy AI systems. It suggests we need better evaluation methods that test how well models can actually *use* knowledge, not just recall it.
Abstract
Large language models (LLMs) can correctly answer "When was Einstein born?" yet fail to provide the same date when writing about Einstein's life, revealing a fundamental inconsistency in how models access factual knowledge across task complexities. While models display impressive accuracy on factual question-answering benchmarks, the reliability gap between simple and complex queries remains poorly understood, eroding their trustworthiness. In this work, we introduce Short-Long Form Alignment for Factual Question Answering (SLAQ), a controlled evaluation framework that compares LLMs' answers to the same factual questions asked (a) in isolation (short) vs. (b) integrated into complex queries (long). Looking at 16 LLMs across 600 queries, we find a systematic misalignment of answers to the corresponding short and long queries. We further uncover position-dependent accuracy loss and momentum effects where consecutive correct or incorrect answers create self-reinforcing patterns. Through mechanistic analysis, we find that aligned facts activate overlapping model internals, and that metrics based on mechanistic similarity can predict short-long answer alignment with up to 78% accuracy. Our work establishes factual consistency over query complexity as an important aspect of LLMs' trustworthiness and challenges current evaluation practices, which implicitly assume that good performance on simple factual queries implies reliability in more complex knowledge-seeking tasks too.