Tensor Logic: The Language of AI
Pedro Domingos
2025-10-15
Summary
This paper argues that current AI development is limited because we're trying to force existing programming languages to do things they weren't designed for, and proposes a new language called tensor logic to fix this.
What's the problem?
Right now, AI relies on tools built *on top* of languages like Python, which are good for general programming but not specifically for AI tasks. These additions, like PyTorch and TensorFlow, handle things like automatic differentiation and efficient use of GPUs, but they don't easily support logical reasoning or automated knowledge acquisition. Older AI languages *can* do reasoning, but they struggle with large datasets and learning from data. Essentially, we have a trade-off: powerful learning but no reasoning, or reasoning but limited learning.
What's the solution?
The paper introduces 'tensor logic,' a new programming language built on the idea that logical rules and a mathematical operation called Einstein summation are fundamentally the same. Everything in the language is built around this single concept, a 'tensor equation.' This allows the language to naturally handle both the mathematical calculations needed for neural networks *and* the logical reasoning needed for symbolic AI, all in one place. The author demonstrates how to build key AI components like transformers and reasoning systems within this framework.
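To make the core claim concrete, here is a minimal sketch (my own illustration, not the paper's notation) of how a logical rule can be read as an Einstein summation. The rule Grandparent(x, z) ← Parent(x, y) ∧ Parent(y, z) becomes an einsum over Boolean-valued tensors, where summing out the shared index y plays the role of the existentially quantified variable; the entity names are made up for the example.

```python
import numpy as np

# Entities: 0 = Alice, 1 = Bob, 2 = Carol (hypothetical).
n = 3
parent = np.zeros((n, n), dtype=int)
parent[0, 1] = 1  # Parent(Alice, Bob)
parent[1, 2] = 1  # Parent(Bob, Carol)

# 'xy,yz->xz' multiplies the two relations and sums over the
# shared variable y -- the logical join followed by projection.
grandparent = np.einsum('xy,yz->xz', parent, parent) > 0

print(grandparent[0, 2])  # Grandparent(Alice, Carol) holds
```

The product encodes conjunction and the summation encodes existential quantification, which is the sense in which logical rules and Einstein summation are "the same operation."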
Why does it matter?
Tensor logic is important because it could bridge the gap between neural and symbolic AI, combining the strengths of both. This could lead to AI systems that are not only powerful and able to learn, but also reliable, transparent, and capable of sound reasoning even when knowledge is represented as learned embeddings. That combination could unlock wider adoption of AI in areas where trust and explainability are crucial.
Abstract
Progress in AI is hindered by the lack of a programming language with all the requisite features. Libraries like PyTorch and TensorFlow provide automatic differentiation and efficient GPU implementation, but are additions to Python, which was never intended for AI. Their lack of support for automated reasoning and knowledge acquisition has led to a long and costly series of hacky attempts to tack them on. On the other hand, AI languages like LISP and Prolog lack scalability and support for learning. This paper proposes tensor logic, a language that solves these problems by unifying neural and symbolic AI at a fundamental level. The sole construct in tensor logic is the tensor equation, based on the observation that logical rules and Einstein summation are essentially the same operation, and all else can be reduced to them. I show how to elegantly implement key forms of neural, symbolic and statistical AI in tensor logic, including transformers, formal reasoning, kernel machines and graphical models. Most importantly, tensor logic makes new directions possible, such as sound reasoning in embedding space. This combines the scalability and learnability of neural networks with the reliability and transparency of symbolic reasoning, and is potentially a basis for the wider adoption of AI.
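As a rough illustration of the abstract's claim that neural components reduce to tensor equations, here is a sketch of scaled dot-product attention, the core of a transformer, written almost entirely as Einstein summations. This is my own NumPy rendering under assumed shapes (t tokens, d dimensions), not the paper's tensor logic syntax.

```python
import numpy as np

rng = np.random.default_rng(0)
t, d = 4, 8
Q = rng.standard_normal((t, d))  # queries
K = rng.standard_normal((t, d))  # keys
V = rng.standard_normal((t, d))  # values

# Similarity tensor: one einsum, summing over the feature index d.
scores = np.einsum('qd,kd->qk', Q, K) / np.sqrt(d)

# Softmax over keys turns scores into attention weights.
weights = np.exp(scores)
weights /= weights.sum(axis=1, keepdims=True)

# Output: a second einsum, summing over the key index k.
out = np.einsum('qk,kd->qd', weights, V)

print(out.shape)  # (4, 8)
```

Apart from the softmax nonlinearity, every step is a tensor equation in the paper's sense, which is what lets the same language host both this computation and the symbolic rules discussed above.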