The Geometry of Reasoning: Flowing Logics in Representation Space

Yufa Zhou, Yixiao Wang, Xunjian Yin, Shuyan Zhou, Anru R. Zhang

2025-10-15

Summary

This paper investigates how large language models, like those powering chatbots, actually 'think' when they're reasoning through problems. It proposes a new way to visualize and understand their internal processes, framing reasoning as a kind of movement or 'flow' within the model's complex mathematical space.

What's the problem?

It's difficult to understand *how* large language models arrive at answers. We know they can often perform logical tasks, but it's unclear if they truly understand the underlying logic or are just recognizing patterns in the way the questions are worded. Essentially, we don't know if they're reasoning or just mimicking reasoning.

What's the solution?

The researchers developed a geometric framework in which reasoning is modeled as a continuous flow through the model's representation space. They tested it by giving the model logical problems with different wording but the same underlying natural-deduction structure. By observing how the model's internal representations evolved as it processed these problems, they could check whether the model was tracking the logic itself rather than the specific words used. They analyzed these 'flows' with geometric quantities such as position, velocity, and curvature, and tested whether those quantities aligned with the logical structure of the problems.
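To make the geometric quantities concrete, here is a minimal sketch of how velocity and curvature could be computed from a sequence of hidden-state vectors. This is an illustrative reconstruction, not the paper's actual code: the `trajectory_geometry` function and the toy 3-D "hidden states" are hypothetical, and the paper works with learned representation proxies rather than raw vectors like these.

```python
import numpy as np

def trajectory_geometry(states):
    """Discrete velocity, speed, and curvature along a sequence of
    hidden-state vectors (one row per reasoning step)."""
    states = np.asarray(states, dtype=float)
    # Velocity: finite difference between consecutive representations.
    velocity = np.diff(states, axis=0)
    # Speed: magnitude of each step through representation space.
    speed = np.linalg.norm(velocity, axis=1)
    # Curvature proxy: turning angle between consecutive velocity vectors.
    curvature = []
    for v1, v2 in zip(velocity[:-1], velocity[1:]):
        cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-12)
        curvature.append(np.arccos(np.clip(cos, -1.0, 1.0)))
    return speed, np.array(curvature)

# Toy trajectory: four 3-D states standing in for a reasoning chain.
states = [[0, 0, 0], [1, 0, 0], [2, 1, 0], [2, 2, 1]]
speed, curvature = trajectory_geometry(states)
```

Under this framing, two problems with the same logic but different wording should trace flows with similar speed and curvature profiles, even if the absolute positions of their trajectories differ.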

Why does it matter?

This work is important because it provides a new way to peek inside the 'black box' of large language models. By understanding how these models reason, we can build more reliable and trustworthy AI systems. It also gives us tools to formally analyze their behavior and improve their ability to solve complex problems, ultimately making them more interpretable and less prone to errors.

Abstract

We study how large language models (LLMs) "think" through their representation space. We propose a novel geometric framework that models an LLM's reasoning as flows -- embedding trajectories evolving where logic goes. We disentangle logical structure from semantics by employing the same natural deduction propositions with varied semantic carriers, allowing us to test whether LLMs internalize logic beyond surface form. This perspective connects reasoning with geometric quantities such as position, velocity, and curvature, enabling formal analysis in representation and concept spaces. Our theory establishes: (1) LLM reasoning corresponds to smooth flows in representation space, and (2) logical statements act as local controllers of these flows' velocities. Using learned representation proxies, we design controlled experiments to visualize and quantify reasoning flows, providing empirical validation of our theoretical framework. Our work serves as both a conceptual foundation and practical tools for studying reasoning phenomena, offering a new lens for interpretability and formal analysis of LLMs' behavior.