SpaceControl: Introducing Test-Time Spatial Control to 3D Generative Modeling
Elisabetta Fedele, Francis Engelmann, Ian Huang, Or Litany, Marc Pollefeys, Leonidas Guibas
2025-12-08
Summary
This paper introduces a technique called SpaceControl that gives you much more direct control over the shapes of AI-generated 3D objects, without needing to retrain the underlying models.
What's the problem?
Currently, creating 3D models with AI relies heavily on describing what you want using text or images. However, text can be unclear when it comes to specific shapes, and editing images to get the exact 3D form you want is difficult and time-consuming. It's hard to precisely tell the AI *exactly* what shape to make.
What's the solution?
SpaceControl solves this by allowing you to directly input geometric information – basically, the shape itself – as a starting point. You can use simple shapes or even detailed 3D models as a guide. It works with existing AI models without needing any extra training, and you can adjust a setting to balance how closely the final result matches your input shape versus how realistic it looks. They even built a tool where you can edit basic shapes and instantly turn them into detailed 3D models.
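The fidelity-versus-realism dial described above can be pictured as a simple blending step applied at test time: nudge the model's prediction toward the user-supplied geometry by some strength. The sketch below is a toy illustration of that idea only; the function name, the occupancy representation, and the linear update rule are assumptions for illustration, not SpaceControl's actual equations.

```python
# Hypothetical sketch of test-time spatial guidance. The update rule and
# names here are illustrative assumptions, not the paper's actual method.

def guided_step(x_pred, shape_target, lam):
    """Blend the model's current prediction toward the user's geometry.

    x_pred:       model's current estimate of the 3D sample (e.g. a voxel grid,
                  flattened to a list of soft occupancies here)
    shape_target: occupancy derived from the user-provided guide shape
    lam:          0.0 = pure generation (realism), 1.0 = exact shape match
    """
    return [(1 - lam) * p + lam * t for p, t in zip(x_pred, shape_target)]

# Toy 1-D "voxel" example: the model predicts soft occupancies and the
# user supplies a hard target shape; lam trades fidelity against realism.
x_pred = [0.2, 0.9, 0.4]
target = [1.0, 1.0, 0.0]

print(guided_step(x_pred, target, 0.0))  # → [0.2, 0.9, 0.4] (untouched)
print(guided_step(x_pred, target, 1.0))  # → [1.0, 1.0, 0.0] (exact target)
print(guided_step(x_pred, target, 0.5))  # halfway blend
```

In a real pipeline a step like this would run inside the generative model's sampling loop, so lower values of lam let the pre-trained model restore realistic detail while higher values pin the output to the input geometry.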
Why does it matter?
This matters because it makes 3D content creation easier and more precise. Instead of struggling to describe a shape with words or reference images, artists and designers can manipulate the geometry directly, leading to faster workflows and more accurate results. It opens the door to more interactive and intuitive 3D creation tools.
Abstract
Generative methods for 3D assets have recently achieved remarkable progress, yet providing intuitive and precise control over the object geometry remains a key challenge. Existing approaches predominantly rely on text or image prompts, which often fall short in geometric specificity: language can be ambiguous, and images are cumbersome to edit. In this work, we introduce SpaceControl, a training-free test-time method for explicit spatial control of 3D generation. Our approach accepts a wide range of geometric inputs, from coarse primitives to detailed meshes, and integrates seamlessly with modern pre-trained generative models without requiring any additional training. A controllable parameter lets users trade off between geometric fidelity and output realism. Extensive quantitative evaluation and user studies demonstrate that SpaceControl outperforms both training-based and optimization-based baselines in geometric faithfulness while preserving high visual quality. Finally, we present an interactive user interface that enables online editing of superquadrics for direct conversion into textured 3D assets, facilitating practical deployment in creative workflows. Find our project page at https://spacecontrol3d.github.io/