Feedforward 3D Editing via Text-Steerable Image-to-3D
Ziqi Ma, Hongqiao Chen, Yisong Yue, Georgia Gkioxari
2025-12-17
Summary
This paper introduces Steer3D, a new method for easily editing 3D models created by artificial intelligence using simple text commands.
What's the problem?
AI tools for creating 3D models have become very capable, but once a model is generated, it's hard to make specific changes without starting over. Existing methods for editing AI-generated 3D objects are often slow, don't accurately follow text instructions, or end up damaging parts of the original design that should stay the same.
What's the solution?
The researchers developed Steer3D, which adds a 'steering' mechanism to existing image-to-3D AI models. It's inspired by a technique called ControlNet and lets text prompts directly influence the 3D generation process in a single forward pass, with no slow per-edit optimization. They also built a data engine to automatically generate large amounts of training data, and used a two-stage training recipe to make the steering work well. As a result, Steer3D is faster and more accurate at following text instructions while keeping the rest of the original 3D model intact.
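To make the ControlNet-inspired idea concrete, here is a minimal sketch of the general pattern: a frozen pretrained layer plus a trainable copy that also sees the new (text) condition, injected through a zero-initialized projection so that at initialization the steered model behaves exactly like the original. All class and variable names here are invented for illustration; this is not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

class SteeredLayer:
    """Toy ControlNet-style layer: frozen pathway + zero-initialized steering branch."""

    def __init__(self, dim, cond_dim):
        # Frozen backbone weights (stand-in for a pretrained image-to-3D layer).
        self.W_frozen = rng.standard_normal((dim, dim)) / np.sqrt(dim)
        # Trainable copy of the backbone that also receives the text condition.
        self.W_copy = self.W_frozen.copy()
        self.W_cond = rng.standard_normal((cond_dim, dim)) / np.sqrt(cond_dim)
        # Zero-initialized output projection (the "zero conv" trick in ControlNet):
        # the steering branch contributes nothing until training updates it.
        self.W_zero = np.zeros((dim, dim))

    def forward(self, x, text_cond):
        base = x @ self.W_frozen                          # frozen pathway
        steer = x @ self.W_copy + text_cond @ self.W_cond  # conditioned branch
        return base + steer @ self.W_zero                 # exactly `base` at init

layer = SteeredLayer(dim=8, cond_dim=4)
x = rng.standard_normal((2, 8))
cond = rng.standard_normal((2, 4))

# At initialization, steering changes nothing about the pretrained model's output.
assert np.allclose(layer.forward(x, cond), x @ layer.W_frozen)
```

The zero initialization is what makes it safe to bolt a new modality onto a pretrained generator: training can only gradually move the output away from the original model's behavior.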
Why it matters?
This work is important because it makes AI-generated 3D models much more practical for things like design, virtual reality, and robotics. Being able to quickly and easily edit these models with text means designers and developers can refine and customize them without needing advanced 3D modeling skills or spending a lot of time on manual adjustments.
Abstract
Recent progress in image-to-3D has opened up immense possibilities for design, AR/VR, and robotics. However, to use AI-generated 3D assets in real applications, a critical requirement is the capability to edit them easily. We present a feedforward method, Steer3D, to add text steerability to image-to-3D models, which enables editing of generated 3D assets with language. Our approach is inspired by ControlNet, which we adapt to image-to-3D generation to enable text steering directly in a forward pass. We build a scalable data engine for automatic data generation, and develop a two-stage training recipe based on flow-matching training and Direct Preference Optimization (DPO). Compared to competing methods, Steer3D more faithfully follows the language instruction and maintains better consistency with the original 3D asset, while being 2.4x to 28.5x faster. Steer3D demonstrates that it is possible to add a new modality (text) to steer the generation of pretrained image-to-3D generative models with 100k training examples. Project website: https://glab-caltech.github.io/steer3d/
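For readers unfamiliar with the first training stage, the standard flow-matching objective can be sketched as follows. With linear interpolation paths between a noise sample and a data sample, the regression target for the velocity field is simply their difference; the `model` below is a hypothetical stand-in, not the paper's network.

```python
import numpy as np

rng = np.random.default_rng(1)

def flow_matching_loss(model, x0, x1, t):
    """Conditional flow-matching loss with linear paths x_t = (1-t)*x0 + t*x1."""
    t = t[:, None]
    x_t = (1.0 - t) * x0 + t * x1
    # For a linear path, the true velocity is the constant displacement x1 - x0.
    v_target = x1 - x0
    v_pred = model(x_t, t)
    return np.mean((v_pred - v_target) ** 2)

# Toy check: an oracle that returns the true velocity achieves zero loss.
x0 = rng.standard_normal((4, 3))   # noise samples
x1 = rng.standard_normal((4, 3))   # "data" (e.g. 3D latents in this setting)
t = rng.uniform(size=4)            # random timesteps in [0, 1]

oracle = lambda x_t, t: x1 - x0
assert flow_matching_loss(oracle, x0, x1, t) == 0.0
```

The second stage, DPO, then fine-tunes the model on preference pairs (better vs. worse edits) rather than on a regression target, which is why the two stages are complementary.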