
Can LLMs Predict Their Own Failures? Self-Awareness via Internal Circuits

Amirhosein Ghasemabadi, Di Niu

2026-01-06


Summary

This paper explores how to make large language models (LLMs) more aware of their own mistakes, specifically when they 'hallucinate' or generate incorrect information.

What's the problem?

LLMs are really good at *sounding* confident and producing fluent text, but they often get things wrong without realizing it. Current ways of checking their work either cost a lot of extra compute, such as running separate judge models or sampling many answers and comparing them, or rely on the model critiquing itself in text, which doesn't line up well with whether the answer is actually correct. Basically, we need a way for LLMs to self-check that is cheap and actually reliable.

What's the solution?

The researchers developed a system called Gnosis. It works by looking at the internal 'thinking' of the LLM *while* it's generating text, specifically the hidden states and attention patterns. Gnosis doesn't change the LLM itself (the model stays 'frozen'); it adds a small extra component that watches these internal signals, compresses them into a fixed-size summary, and predicts whether the output will be correct. It's like giving the LLM a little 'gut feeling' detector. The added component is tiny (only about 5 million parameters), adds negligible cost at inference time, and that cost stays the same no matter how long the generated text is.
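
To make the idea concrete, here is a minimal sketch of what such a probe could look like in PyTorch. The class name, layer sizes, and the simple mean-pooling over the sequence are illustrative assumptions, not the paper's actual Gnosis architecture, which builds richer descriptors from both hidden states and attention patterns.

```python
# Minimal sketch, assuming a frozen LLM whose hidden states we can capture.
# Names and dimensions are illustrative, not the Gnosis implementation.
import torch
import torch.nn as nn

class CorrectnessProbe(nn.Module):
    """Small head that reads captured hidden states and scores correctness."""

    def __init__(self, hidden_dim: int, probe_dim: int = 256):
        super().__init__()
        # A small MLP; the underlying LLM itself is never updated.
        self.head = nn.Sequential(
            nn.Linear(hidden_dim, probe_dim),
            nn.GELU(),
            nn.Linear(probe_dim, 1),
        )

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_dim) observed during generation.
        # Mean-pool over the sequence so the probe's cost does not grow with length.
        pooled = hidden_states.mean(dim=1)        # (batch, hidden_dim)
        return torch.sigmoid(self.head(pooled))   # (batch, 1) predicted P(correct)
```

The key design point the paper emphasizes is that the probe only *observes* the frozen model's internals; it never changes how the text is generated.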

Why it matters?

This research is important because it shows that LLMs already contain clues about their own reliability within their internal processes. Gnosis can tap into these clues efficiently, meaning we might be able to build more trustworthy AI systems without needing massive amounts of extra computing power or external verification. It also allows for stopping a generation early if the model detects it's going down the wrong path, saving resources.
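
As a rough illustration of the early-stopping idea, the sketch below checks a probe score every so often during decoding and aborts if the predicted chance of success falls too low. The `model.step` interface, the probe call, and the threshold values are hypothetical placeholders, not an API from the paper or from any specific library.

```python
def generate_with_early_stop(model, probe, prompt_ids,
                             max_new_tokens=512, check_every=64, threshold=0.2):
    """Generate tokens, aborting early if the probe predicts likely failure."""
    ids = prompt_ids
    for step in range(max_new_tokens):
        # Hypothetical interface: one decode step that also returns the
        # hidden states observed so far (batch x seq_len x hidden_dim).
        ids, hidden = model.step(ids)
        if (step + 1) % check_every == 0:
            score = probe(hidden).item()   # predicted probability the answer is correct
            if score < threshold:
                return ids, "aborted"      # stop early and save the remaining compute
    return ids, "completed"
```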

Abstract

Large language models (LLMs) generate fluent and complex outputs but often fail to recognize their own mistakes and hallucinations. Existing approaches typically rely on external judges, multi-sample consistency, or text-based self-critique, which incur additional compute or correlate weakly with true correctness. We ask: can LLMs predict their own failures by inspecting internal states during inference? We introduce Gnosis, a lightweight self-awareness mechanism that enables frozen LLMs to perform intrinsic self-verification by decoding signals from hidden states and attention patterns. Gnosis passively observes internal traces, compresses them into fixed-budget descriptors, and predicts correctness with negligible inference cost, adding only ~5M parameters and operating independently of sequence length. Across math reasoning, open-domain question answering, and academic knowledge benchmarks, and over frozen backbones ranging from 1.7B to 20B parameters, Gnosis consistently outperforms strong internal baselines and large external judges in both accuracy and calibration. Moreover, it generalizes zero-shot to partial generations, enabling early detection of failing trajectories and compute-aware control. These results show that reliable correctness cues are intrinsic to the generation process and can be extracted efficiently without external supervision.
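
One phrase in the abstract, "fixed-budget descriptors", is worth unpacking: the trace of internal states grows with the generated text, so it has to be squeezed into a fixed-size summary before the probe sees it. The chunk-and-pool scheme below is only an assumed stand-in for whatever compression Gnosis actually uses; it just shows how a variable-length trace can be reduced to a length-independent descriptor.

```python
# Sketch of a fixed-budget compression step, assuming a (seq_len, hidden_dim)
# trace of hidden states. The chunking scheme is an assumption, not the paper's method.
import torch

def fixed_budget_descriptor(hidden_states: torch.Tensor, k: int = 16) -> torch.Tensor:
    """Compress a (seq_len, hidden_dim) trace into a fixed (k, hidden_dim) descriptor."""
    seq_len, hidden_dim = hidden_states.shape
    # Split the sequence into k roughly equal chunks and mean-pool each one,
    # so the output shape is the same no matter how long the generation was.
    chunks = torch.tensor_split(hidden_states, k, dim=0)
    slots = [c.mean(dim=0) if c.shape[0] > 0
             else torch.zeros(hidden_dim, dtype=hidden_states.dtype)
             for c in chunks]
    return torch.stack(slots)
```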