
Less is More: Recursive Reasoning with Tiny Networks

Alexia Jolicoeur-Martineau

2025-10-08

Summary

This paper introduces the Tiny Recursive Model (TRM), a very small AI model that performs surprisingly well on challenging reasoning puzzles, even outperforming much larger AI models.

What's the problem?

Existing AI models, especially large language models, require massive amounts of data and computing power to solve complex problems like puzzles. A previous attempt to build a smaller, reasoning-focused model called the Hierarchical Reasoning Model (HRM) showed promise but wasn't fully optimized and didn't generalize well to new puzzles. The challenge was to create a small model that could effectively reason and solve these hard problems without needing huge resources.

What's the solution?

The researchers created TRM, which is significantly simpler than HRM. Instead of using two separate networks working at different speeds, TRM uses just *one* very small network with only two layers and a mere 7 million parameters. It works by repeatedly applying this single network to the problem, essentially 'thinking' through the puzzle step by step, as the sketch below illustrates. This recursive approach lets it reach high accuracy on difficult puzzle benchmarks such as ARC-AGI.
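
To make "repeatedly applying one tiny network" concrete, here is a minimal PyTorch sketch. The class and function names, the two-layer MLP body, and the inner/outer loop counts are illustrative assumptions based on the paper's high-level description, not the authors' actual code.

```python
# Illustrative sketch of TRM-style recursive reasoning (not the authors' code).
# Assumptions: a two-layer MLP body, vector-shaped inputs, and an update
# schedule of n_inner latent refinements followed by one answer refinement,
# repeated n_outer times.
import torch
import torch.nn as nn

class TinyRecursiveNet(nn.Module):
    """A single tiny network reused for every reasoning step."""
    def __init__(self, dim: int = 128):
        super().__init__()
        # "Only two layers": a small two-layer MLP stands in here.
        self.body = nn.Sequential(
            nn.Linear(3 * dim, dim), nn.ReLU(), nn.Linear(dim, dim)
        )

    def forward(self, x, y, z):
        # One reasoning step: combine the question x, the current answer
        # guess y, and the latent "scratchpad" z into an update vector.
        return self.body(torch.cat([x, y, z], dim=-1))

def recursive_reason(net, x, n_inner: int = 6, n_outer: int = 3):
    # y: current answer guess; z: latent reasoning state.
    y = torch.zeros_like(x)
    z = torch.zeros_like(x)
    for _ in range(n_outer):          # outer improvement loop
        for _ in range(n_inner):      # refine the latent state
            z = net(x, y, z)
        y = y + net(x, y, z)          # then refine the answer itself
    return y

net = TinyRecursiveNet()
x = torch.randn(4, 128)               # a batch of 4 puzzle embeddings
answer = recursive_reason(net, x)
print(answer.shape)                    # torch.Size([4, 128])
```

The key design point is that the same weights are reused at every step, so the depth of reasoning comes from iteration rather than from parameter count.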

Why does it matter?

This work is important because it demonstrates that strong reasoning abilities don't necessarily require enormous AI models. TRM achieves better results than many much larger models, using a tiny fraction of the computational resources. This opens the door to developing powerful AI that can run on less powerful hardware, making it more accessible and energy-efficient, and potentially leading to new AI applications in resource-constrained environments.

Abstract

Hierarchical Reasoning Model (HRM) is a novel approach using two small neural networks recursing at different frequencies. This biologically inspired method beats Large Language Models (LLMs) on hard puzzle tasks such as Sudoku, Maze, and ARC-AGI while trained with small models (27M parameters) on small data (around 1000 examples). HRM holds great promise for solving hard problems with small networks, but it is not yet well understood and may be suboptimal. We propose Tiny Recursive Model (TRM), a much simpler recursive reasoning approach that achieves significantly higher generalization than HRM, while using a single tiny network with only 2 layers. With only 7M parameters, TRM obtains 45% test accuracy on ARC-AGI-1 and 8% on ARC-AGI-2, higher than most LLMs (e.g., DeepSeek R1, o3-mini, Gemini 2.5 Pro) with less than 0.01% of the parameters.
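
As a rough sanity check on the "less than 0.01% of the parameters" claim, here is a back-of-the-envelope comparison. The 70B-parameter LLM size is an assumed reference point for scale, not a figure from the paper.

```python
# Rough parameter-ratio check (illustrative). TRM's 7M parameters are from
# the paper; the 70B LLM size is an assumption chosen for scale.
trm_params = 7e6
llm_params = 70e9
print(f"{trm_params / llm_params:.4%}")  # 0.0100% -- i.e., about 0.01%
```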