Stemming Hallucination in Language Models Using a Licensing Oracle
Simeon Emanuilov, Richard Ackermann
2025-11-13
Summary
This paper tackles the issue of language models confidently stating things that aren't true – what's called 'hallucination'. It introduces a new system called the Licensing Oracle that aims to make sure a language model only says things that can be verified as facts.
What's the problem?
Large language models are really good at *sounding* correct when they generate text, but they often make up information. They can create fluent and grammatically perfect sentences that are completely false. Simply making the models bigger or training them on more data doesn't reliably fix this problem, and existing methods like trying to teach them to say 'I don't know' aren't perfect either.
What's the solution?
The researchers built a system, the Licensing Oracle, that works like a fact-checker *inside* the language model. Before the model says something, the Oracle checks if that statement is actually true based on a structured database of facts. If the Oracle doesn't confirm it, the model doesn't say it. This isn't about probabilities or guessing; it's a hard rule that ensures everything the model outputs is verifiable.
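The gating behaviour described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the triple-based knowledge graph, the `license_claim`/`gated_answer` names, and the claim format are all assumptions made for the example.

```python
# Hypothetical sketch of a licensing-oracle gate. The knowledge graph is
# modelled as a set of (subject, relation, object) triples -- an
# illustrative simplification of the structured database the paper uses.

KNOWLEDGE_GRAPH = {
    ("Paris", "capital_of", "France"),
    ("Berlin", "capital_of", "Germany"),
}

def license_claim(triple):
    """Deterministic check: a claim is licensed only if it is present
    in the structured knowledge graph -- no probabilities involved."""
    return triple in KNOWLEDGE_GRAPH

def gated_answer(candidate_triple):
    """Emit the claim only when the oracle licenses it; abstain otherwise."""
    if license_claim(candidate_triple):
        subj, rel, obj = candidate_triple
        return f"{subj} is the {rel.replace('_', ' ')} {obj}."
    return "I don't know."
```

The key design point is that the check is a hard rule applied after generation proposes a claim: an unlicensed claim is never emitted, so the model abstains rather than guessing.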
Why does it matter?
This work is important because it shows a way to *guarantee* factual accuracy in language models, something that simply scaling up the model or using statistical tricks can't achieve. It's a step towards building AI systems that are truly reliable and trustworthy, especially in situations where correct information is critical. The idea of building in a 'truth check' could be applied to other AI systems beyond just language models.
Abstract
Language models exhibit remarkable natural language generation capabilities but remain prone to hallucinations, generating factually incorrect information despite producing syntactically coherent responses. This study introduces the Licensing Oracle, an architectural solution designed to stem hallucinations in LMs by enforcing truth constraints through formal validation against structured knowledge graphs. Unlike statistical approaches that rely on data scaling or fine-tuning, the Licensing Oracle embeds a deterministic validation step into the model's generative process, ensuring that only factually accurate claims are made. We evaluated the effectiveness of the Licensing Oracle through experiments comparing it with several state-of-the-art methods, including baseline language model generation, fine-tuning for factual recall, fine-tuning for abstention behavior, and retrieval-augmented generation (RAG). Our results demonstrate that although RAG and fine-tuning improve performance, they fail to eliminate hallucinations. In contrast, the Licensing Oracle achieved perfect abstention precision (AP = 1.0) and zero false answers (FAR-NE = 0.0), ensuring that only valid claims were generated with 89.1% accuracy in factual responses. This work shows that architectural innovations, such as the Licensing Oracle, offer a necessary and sufficient solution for hallucinations in domains with structured knowledge representations, offering guarantees that statistical methods cannot match. Although the Licensing Oracle is specifically designed to address hallucinations in fact-based domains, its framework lays the groundwork for truth-constrained generation in future AI systems, providing a new path toward reliable, epistemically grounded models.
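The abstract reports abstention precision (AP = 1.0) and zero false answers (FAR-NE = 0.0). As a rough illustration of how such metrics could be computed, the sketch below assumes AP is the fraction of abstentions that were correct (the model abstained exactly when it should have) and the false-answer rate is the fraction of questions on which a wrong answer was asserted instead of an abstention. The record keys and these exact definitions are assumptions for the example, not taken from the paper.

```python
def abstention_precision(records):
    """Of all abstentions, what fraction occurred on questions where
    abstaining was the right behaviour? `records` is a list of dicts
    with illustrative keys: abstained, should_abstain, answer_correct."""
    abstained = [r for r in records if r["abstained"]]
    if not abstained:
        return 0.0
    correct = sum(1 for r in abstained if r["should_abstain"])
    return correct / len(abstained)

def false_answer_rate(records):
    """Fraction of all questions on which the model asserted an
    incorrect answer rather than abstaining."""
    wrong = sum(
        1 for r in records
        if not r["abstained"] and not r["answer_correct"]
    )
    return wrong / len(records)

# A toy evaluation mirroring the paper's headline result: every
# abstention is warranted (AP = 1.0) and no false answer is emitted.
records = [
    {"abstained": True,  "should_abstain": True,  "answer_correct": False},
    {"abstained": False, "should_abstain": False, "answer_correct": True},
    {"abstained": False, "should_abstain": False, "answer_correct": True},
]
```

Under these definitions, a system that answers only licensed claims and abstains everywhere else scores AP = 1.0 and a false-answer rate of 0.0 by construction, which is the guarantee the architectural approach is claimed to provide.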