SpaceTools: Tool-Augmented Spatial Reasoning via Double Interactive RL
Siyi Chen, Mikaela Angelina Uy, Chan Hee Song, Faisal Ladhak, Adithyavairavan Murali, Qing Qu, Stan Birchfield, Valts Blukis, Jonathan Tremblay
2025-12-04
Summary
This paper introduces a new training method, called Double Interactive Reinforcement Learning (DIRL), that helps Vision Language Models (VLMs) get better at spatial reasoning: understanding precise spatial relationships and acting on them in the physical world.
What's the problem?
While VLMs are good at broadly understanding what they 'see' in images, they struggle with precise spatial tasks like figuring out exactly where objects are and how to manipulate them. Existing attempts to improve this give VLMs specific tools, such as depth estimators or object detectors, but these approaches usually require humans to carefully design how the tools are used, or limit the VLM to a single tool at a time. Getting a VLM to intelligently choose *which* tools to use and *when* is a big challenge, because with multiple tools the number of possible combinations becomes huge.
What's the solution?
The researchers developed DIRL, a two-phase training process. In the first phase (teaching), they train the VLM on examples of how to use each tool individually, combined with examples of how to use all the tools together. In the second phase (exploration), the VLM practices using the tools on its own, learning from its mistakes and improving its coordination through reinforcement learning. This lets the VLM discover the best way to combine different tools for specific spatial tasks without a human specifying exactly what to do.
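The two-phase idea can be illustrated with a deliberately tiny toy sketch, assuming a simplified setup that is not from the paper: the "policy" is just a probability distribution over tool sequences, the teaching phase imitates demonstration traces, and the exploration phase nudges probability toward sequences that earn higher reward. All names and the reward function here are hypothetical illustrations of the structure, not the paper's actual implementation.

```python
import random

# Hypothetical tools, standing in for depth estimators, segmenters, etc.
TOOLS = ["depth", "segment", "pose"]

def teaching_phase(demos):
    """Phase 1 (teaching): imitate demonstrations.
    Count how often each tool sequence appears in the demo traces and
    normalize the counts into an initial policy."""
    counts = {}
    for trace in demos:
        key = tuple(trace)
        counts[key] = counts.get(key, 0) + 1
    total = sum(counts.values())
    return {seq: c / total for seq, c in counts.items()}

def exploration_phase(policy, reward_fn, steps=500, lr=0.1, seed=0):
    """Phase 2 (exploration): sample tool sequences from the current
    policy, score them, and shift probability mass toward high-reward
    sequences (a crude policy-gradient-style update)."""
    rng = random.Random(seed)
    policy = dict(policy)
    seqs = list(policy)
    for _ in range(steps):
        seq = rng.choices(seqs, weights=[policy[s] for s in seqs])[0]
        # Advantage relative to the expected reward under the policy.
        baseline = sum(policy[s] * reward_fn(s) for s in seqs)
        policy[seq] += lr * (reward_fn(seq) - baseline) * policy[seq]
        policy[seq] = max(policy[seq], 1e-6)
        z = sum(policy.values())          # renormalize to a distribution
        policy = {s: p / z for s, p in policy.items()}
    return policy

# Demos: single-tool specialist traces plus one multi-tool trace,
# mirroring DIRL's mix of specialist and frontier-model demonstrations.
demos = [["depth"], ["segment"], ["depth", "segment", "pose"]]

# Toy reward: coordinating all three tools solves the task best.
def reward(seq):
    return 1.0 if seq == ("depth", "segment", "pose") else 0.2

policy = teaching_phase(demos)
policy = exploration_phase(policy, reward)
best = max(policy, key=policy.get)
```

The point of the sketch is the division of labor: imitation gives the policy sensible starting behaviors (including the multi-tool trace it would be unlikely to find by random search), and reinforcement then concentrates probability on the tool combination that actually earns reward.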
Why it matters?
This work is important because it allows VLMs to perform more complex tasks in the real world, like robot manipulation. By enabling VLMs to effectively use multiple tools, it moves us closer to creating AI agents that can truly understand and interact with their environment in a flexible and intelligent way, outperforming previous methods that relied on simpler approaches or limited toolsets.
Abstract
Vision Language Models (VLMs) demonstrate strong qualitative visual understanding, but struggle with metrically precise spatial reasoning required for embodied applications. The agentic paradigm promises that VLMs can use a wide variety of tools that could augment these capabilities, such as depth estimators, segmentation models, and pose estimators. Yet it remains an open challenge how to realize this vision without solely relying on handcrafted prompting strategies or enforcing fixed, predefined tool pipelines that limit VLMs' ability to discover optimal tool-use patterns. Reinforcement Learning could overcome this gap, but has so far been limited to reasoning with a single visual tool due to the large search space in multi-tool reasoning. We introduce Double Interactive Reinforcement Learning (DIRL), a two-phase training framework where VLMs learn to coordinate multiple tools through interactive exploration and feedback. In the teaching phase, we combine demonstrations from a single tool specialist trained via interactive RL with traces from a frontier model using all tools. In the exploration phase, the model further refines multi-tool coordination through continued RL. Our model, SpaceTools, with tool-augmented spatial reasoning ability, achieves state-of-the-art performance on spatial understanding benchmarks (RoboSpatial-Home, BLINK, BOP-ASK) and demonstrates reliable real-world manipulation using a 7-DOF robot as a tool. DIRL provides substantial improvements over the vanilla SFT (+12% on RoboSpatial) and RL (+16% on RoboSpatial) baselines. Project page: https://spacetools.github.io/.