Mini-o3: Scaling Up Reasoning Patterns and Interaction Turns for Visual Search

Xin Lai, Junyi Li, Wei Li, Tao Liu, Tianjian Li, Hengshuang Zhao

2025-09-10

Summary

This paper focuses on improving how artificial intelligence systems solve complex visual problems by allowing them to 'think' through a problem using tools, similar to how a person might explore and test different ideas.

What's the problem?

Current AI systems that use tools to solve visual tasks often get stuck in repetitive thought patterns and can't handle problems that require a lot of trial and error. They also have a limited number of steps they can take before giving up, making them ineffective for truly challenging tasks. Essentially, they don't have the patience or flexibility to explore solutions deeply.

What's the solution?

The researchers created a system called Mini-o3 that can perform much more extensive, multi-turn reasoning, sometimes taking tens of steps to solve a problem. They did this in three main ways. First, they built a new dataset of difficult visual search problems (the Visual Probe Dataset) designed to require exploration. Second, they developed an iterative pipeline to collect training trajectories that show diverse reasoning strategies, such as systematically checking options (depth-first search), trial-and-error, and keeping the goal in mind across many steps. Finally, they changed how the AI learns during reinforcement learning so it isn't penalized for responses that run out of turns, encouraging it to explore more thoroughly. Although the model is trained with a limit of only six interaction turns, it naturally extends to many more when actually solving problems.
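The third component, not penalizing responses that hit the turn limit, can be pictured as masking those trajectories out of the training loss. The sketch below is a hypothetical illustration of that idea in a REINFORCE-style objective; the function name, arguments, and exact loss form are assumptions, not the paper's implementation.

```python
import torch

def masked_policy_loss(logprobs, advantages, hit_turn_limit):
    """Illustrative over-turn masking (hypothetical implementation).

    Trajectories that stopped only because they hit the maximum number
    of turns are excluded from the loss, so the model is not taught to
    treat running out of turns as a failure.
    """
    # mask = 0 for over-turn trajectories, 1 for completed ones
    mask = (~hit_turn_limit).float()
    # standard policy-gradient term, zeroed out for masked trajectories
    per_traj = -(logprobs * advantages) * mask
    # average over the trajectories that actually contribute
    return per_traj.sum() / mask.sum().clamp(min=1.0)
```

The intended effect is that the turn cap stays small during training (cheap rollouts) without teaching the model that long explorations are bad, which is what lets trajectory length grow at test time.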

Why it matters?

This work is important because it shows how to build AI systems that can tackle more complex visual tasks by allowing them to reason more deeply and explore more possibilities. This is a step towards AI that can solve problems more like humans do, by trying different approaches and learning from their mistakes, and it improves performance on challenging visual search tasks.

Abstract

Recent advances in large multimodal models have leveraged image-based tools with reinforcement learning to tackle visual problems. However, existing open-source approaches often exhibit monotonous reasoning patterns and allow only a limited number of interaction turns, making them inadequate for difficult tasks that require trial-and-error exploration. In this work, we address this limitation by scaling up tool-based interactions and introduce Mini-o3, a system that executes deep, multi-turn reasoning -- spanning tens of steps -- and achieves state-of-the-art performance on challenging visual search tasks. Our recipe for reproducing OpenAI o3-style behaviors comprises three key components. First, we construct the Visual Probe Dataset, a collection of thousands of challenging visual search problems designed for exploratory reasoning. Second, we develop an iterative data collection pipeline to obtain cold-start trajectories that exhibit diverse reasoning patterns, including depth-first search, trial-and-error, and goal maintenance. Third, we propose an over-turn masking strategy that prevents penalization of over-turn responses (those that hit the maximum number of turns) during reinforcement learning, thereby balancing training-time efficiency with test-time scalability. Despite training with an upper bound of only six interaction turns, our model generates trajectories that naturally scale to tens of turns at inference time, with accuracy improving as the number of turns increases. Extensive experiments demonstrate that Mini-o3 produces rich reasoning patterns and deep thinking paths, effectively solving challenging visual search problems.
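The tool-based interaction loop described in the abstract can be sketched as a simple agent loop where the turn cap is just a configuration knob: small (six) during training, much larger at inference. Everything below is an assumed interface for illustration; `run_episode`, the action dictionary format, and the tool signatures are not from the paper.

```python
def run_episode(model, tools, question, max_turns=6):
    """Hypothetical multi-turn tool-interaction loop.

    Each turn, the model either calls an image tool (e.g. crop/zoom)
    or emits a final answer. Returns (answer, turns_used); answer is
    None if the turn cap was hit without answering.
    """
    history = [("user", question)]
    for turn in range(max_turns):
        action = model(history)  # model decides: tool call or final answer
        if action["type"] == "answer":
            return action["text"], turn + 1
        # execute the requested tool and feed the result back as context
        result = tools[action["tool"]](action["args"])
        history.append(("tool", result))
    return None, max_turns  # hit the cap without answering
```

Under this framing, over-turn masking applies exactly to the `(None, max_turns)` case during training, while at inference `max_turns` can simply be raised so trajectories scale to tens of turns.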