Figure It Out: Improving the Frontier of Reasoning with Active Visual Thinking

Meiqi Chen, Fandong Meng, Jie Zhou

2026-01-01

Summary

This paper introduces a new approach called FIGR that helps AI models solve complex reasoning problems, especially those involving spatial or geometric structure, by letting them 'think visually'.

What's the problem?

Current AI models are really good at processing text, but they struggle with problems where understanding the overall structure or relationships between things is key. If a problem requires visualizing how parts connect or fit together, simply reading the text isn't enough for these models to reliably find the correct answer. They have trouble keeping track of the 'big picture' when reasoning through multiple steps.

What's the solution?

FIGR tackles this by allowing the AI to create visual representations as it works through a problem. It's like sketching out a diagram to help you think. The AI learns *when* to create these visuals and *how* to use them to guide its reasoning process, using a technique called reinforcement learning. Essentially, it practices and gets better at deciding when a visual aid will be helpful. It doesn't just passively look at images; it actively generates them to support its thinking.
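The loop described above can be sketched in code. This is a minimal, hypothetical illustration of the idea, not the paper's actual implementation: all names (`decide_to_draw`, `render_figure`, `reason`) are illustrative stubs, and the keyword-based trigger stands in for the learned reinforcement-learning policy that decides when a visual aid is worthwhile.

```python
def decide_to_draw(thought: str) -> bool:
    """Stub policy: invoke visual thinking when the thought mentions
    spatial or structural cues. In FIGR this decision is learned
    end-to-end via reinforcement learning, not hard-coded like this."""
    cues = ("geometry", "spatial", "graph", "structure")
    return any(cue in thought.lower() for cue in cues)


def render_figure(thought: str) -> str:
    """Stub renderer: stands in for actually constructing a figure
    that externalizes the model's intermediate structural hypothesis."""
    return f"[figure sketched for: {thought}]"


def reason(problem: str, steps: list[str]) -> list[str]:
    """Multi-turn loop: each text reasoning step may trigger a visual
    step, and the resulting figure is fed back into the context so it
    can guide subsequent steps."""
    context = [problem]
    for thought in steps:
        context.append(thought)
        if decide_to_draw(thought):
            context.append(render_figure(thought))
    return context


trace = reason(
    "Place 4 points so every pairwise distance is equal in 3D.",
    ["Consider the geometry of a tetrahedron.", "Check pairwise distances."],
)
```

Here only the first step trips the (toy) trigger, so a figure is interleaved after it; the second step proceeds as plain text. The real system replaces both stubs with learned components trained jointly with the reasoning policy.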

Why it matters?

This research shows that combining text understanding with visual reasoning significantly improves performance on challenging math problems. FIGR outperformed existing text-based AI models by a noticeable margin, demonstrating that letting AI 'see' and visualize can make it much more stable and accurate on complex, structure-heavy reasoning tasks. This is a step towards AI that reasons more like humans do.

Abstract

Complex reasoning problems often involve implicit spatial, geometric, and structural relationships that are not explicitly encoded in text. While recent reasoning models have achieved strong performance across many domains, purely text-based reasoning struggles to represent global structural constraints in complex settings. In this paper, we introduce FIGR, which integrates active visual thinking into multi-turn reasoning via end-to-end reinforcement learning. FIGR externalizes intermediate structural hypotheses by constructing visual representations during problem solving. By adaptively regulating when and how visual reasoning should be invoked, FIGR enables more stable and coherent reasoning over global structural properties that are difficult to capture from text alone. Experiments on challenging mathematical reasoning benchmarks demonstrate that FIGR outperforms strong text-only chain-of-thought baselines. In particular, FIGR improves the base model by 13.12% on AIME 2025 and 11.00% on BeyondAIME, highlighting the effectiveness of figure-guided multimodal reasoning in enhancing the stability and reliability of complex reasoning.