Sound and Complete Neuro-symbolic Reasoning with LLM-Grounded Interpretations
Bradley P. Allen, Prateek Chhikara, Thomas Macaulay Ferguson, Filip Ilievski, Paul Groth
2025-07-15
Summary
This paper presents a method that combines large language models with formal logic to make AI reasoning more reliable and accurate, even when the available information is contradictory.
What's the problem?
Large language models sometimes produce answers that are logically inconsistent or simply wrong: they have no built-in guarantees about how they reason, and the information they draw on can conflict with itself.
What's the solution?
The solution is to pair language models with a paraconsistent logic, a kind of formal logic in which a contradiction does not force every other statement to become provable. The language model supplies the interpretations of the logic's atomic statements, judging whether each basic claim is supported or refuted, and the symbolic machinery then reasons over those judgments. The resulting system remains sound (it derives only conclusions that follow from the premises) and complete (it can derive every conclusion that does follow), even when the model's judgments conflict with one another.
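To make the grounding idea concrete, here is a minimal Python sketch of one way an LLM could supply a four-valued (Belnap-Dunn style) interpretation, a standard paraconsistent setting. This is an illustration of the general approach rather than the paper's exact construction, and `ask_llm` is a hypothetical stub standing in for a real model call.

```python
# Minimal sketch: an LLM-grounded, four-valued (Belnap-Dunn style) valuation.
# Illustrative only; the paper's exact logic and prompting may differ.

from enum import Enum

class V(Enum):
    TRUE = "t"      # supported, not refuted
    FALSE = "f"     # refuted, not supported
    BOTH = "b"      # supported and refuted (contradictory evidence)
    NEITHER = "n"   # neither supported nor refuted (information gap)

def ask_llm(question: str) -> bool:
    """Hypothetical stand-in for an LLM yes/no judgment."""
    accepted = {
        "penguins are birds",
        "penguins can fly",                         # one conflicting source
        "it is not the case that penguins can fly", # another source
    }
    return question.lower() in accepted

def ground_atom(statement: str) -> V:
    """Ask the model independently about a statement and its negation,
    then combine the two answers into one of four truth values."""
    supported = ask_llm(statement)
    refuted = ask_llm(f"It is not the case that {statement}")
    if supported and refuted:
        return V.BOTH
    if supported:
        return V.TRUE
    if refuted:
        return V.FALSE
    return V.NEITHER

# Four-valued connectives: negation swaps support and refutation;
# conjunction/disjunction are meet/join in the truth ordering
# FALSE <= BOTH, NEITHER <= TRUE (BOTH and NEITHER are incomparable).
def neg(v: V) -> V:
    return {V.TRUE: V.FALSE, V.FALSE: V.TRUE,
            V.BOTH: V.BOTH, V.NEITHER: V.NEITHER}[v]

_ORDER = {V.FALSE: 0, V.BOTH: 1, V.NEITHER: 1, V.TRUE: 2}

def conj(a: V, b: V) -> V:
    if {a, b} == {V.BOTH, V.NEITHER}:
        return V.FALSE  # meet of the two incomparable middle values
    return a if _ORDER[a] <= _ORDER[b] else b

def disj(a: V, b: V) -> V:
    if {a, b} == {V.BOTH, V.NEITHER}:
        return V.TRUE   # join of the two incomparable middle values
    return a if _ORDER[a] >= _ORDER[b] else b

if __name__ == "__main__":
    p = ground_atom("Penguins are birds")   # V.TRUE
    q = ground_atom("Penguins can fly")     # V.BOTH (conflicting sources)
    print(p, q, conj(p, neg(q)))            # contradiction stays contained
```

The design point this sketch tries to show is that contradictory judgments land on the value "both" rather than collapsing the whole system: the contradiction is recorded and contained, and conclusions that do not depend on it are unaffected.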
Why does it matter?
This matters because it helps AI systems reason more reliably, especially when the available information is incomplete or conflicting. That, in turn, can improve trust in AI decisions and support applications in areas such as law, medicine, and scientific research.
Abstract
The paper presents a method for integrating large language models into the formal semantics of a paraconsistent logic, enabling neuro-symbolic reasoning while preserving logical soundness and completeness.