
VoxHammer: Training-Free Precise and Coherent 3D Editing in Native 3D Space

Lin Li, Zehuan Huang, Haoran Feng, Gengxiong Zhuang, Rui Chen, Chunchao Guo, Lu Sheng

2025-08-27


Summary

This paper introduces VoxHammer, a training-free technique for editing 3D models that changes specified parts of a model while keeping the rest intact.

What's the problem?

Currently, most methods edit a 3D model indirectly: they render it from multiple viewpoints, edit those images, and then reconstruct a new 3D model from them. This makes it hard to change just one part without disturbing the parts of the model you *don't* want to edit. Existing methods often fail to preserve the original shape and consistency of the unedited areas, and the edited parts don't always blend in seamlessly.

What's the solution?

VoxHammer works by first 'inverting' a 3D model, that is, recovering how it is represented inside a 3D generative model; think of it as finding the 'code' (the latents) that generates the model. Then, when you want to edit something, it keeps the original 'code' for the parts you *don't* want to change and only regenerates the 'code' for the areas you *do* want to edit. This keeps the unedited parts consistent and lets the new edits blend naturally into the rest of the model (see the sketch below). They also created a new benchmark dataset, Edit3D-Bench, specifically to test how well editing methods preserve the unedited parts of a 3D model.
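To make the idea concrete, here is a minimal, hypothetical PyTorch sketch of the two-phase loop. The functions `inversion_step` and `denoise_step`, the latent shapes, and the mask are stand-in assumptions for illustration, not the paper's actual model, and the key-value token caching the paper also uses is omitted in this sketch.

```python
import torch

# Toy stand-ins for the structured 3D generative model: in practice these
# would be DDIM-style steps of a pretrained latent diffusion model over a
# voxel/sparse latent grid. Here they are stubs so the control flow runs.
def inversion_step(z, t):
    """One inversion step: map the latent one timestep toward noise (stub)."""
    return z + 0.01 * torch.randn_like(z)

def denoise_step(z, t):
    """One denoising step under the editing condition (stub)."""
    return 0.99 * z

T = 50                                # number of diffusion timesteps
z = torch.randn(1, 8, 16, 16, 16)     # latent grid encoding the 3D model
keep = torch.zeros_like(z, dtype=torch.bool)
keep[..., :8] = True                  # voxels to PRESERVE (not edit)

# Phase 1: invert the input model, caching the latent at every timestep.
inverted = []
for t in range(T):
    z = inversion_step(z, t)
    inverted.append(z.clone())

# Phase 2: denoise under the edit condition, but after each step overwrite
# the preserved region with the cached inverted latent (roughly aligned by
# timestep), so unedited geometry is reconstructed while edits are
# generated in a consistent context.
z_edit = inverted[-1].clone()
for t in reversed(range(T)):
    z_edit = denoise_step(z_edit, t)
    z_edit = torch.where(keep, inverted[t], z_edit)
```

The crucial line is the `torch.where` at the end of each denoising step: wherever the mask says 'preserve', the latent is overwritten with the cached inverted latent, so the unedited region is reproduced faithfully while the edited region is regenerated around it.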

Why it matters?

This research is important because it improves the ability to edit 3D models accurately, which is crucial for industries like video game development and robotics. Precise 3D editing makes for better game assets and helps robots interact with the world more effectively, and it can also synthesize the paired edited data needed to build even more advanced 3D generation tools.

Abstract

3D local editing of specified regions is crucial for the game industry and robot interaction. Recent methods typically edit rendered multi-view images and then reconstruct 3D models, but they face challenges in precisely preserving unedited regions and overall coherence. Inspired by structured 3D generative models, we propose VoxHammer, a novel training-free approach that performs precise and coherent editing in 3D latent space. Given a 3D model, VoxHammer first predicts its inversion trajectory and obtains its inverted latents and key-value tokens at each timestep. Subsequently, in the denoising and editing phase, we replace the denoising features of preserved regions with the corresponding inverted latents and cached key-value tokens. By retaining these contextual features, this approach ensures consistent reconstruction of preserved areas and coherent integration of edited parts. To evaluate the consistency of preserved regions, we constructed Edit3D-Bench, a human-annotated dataset comprising hundreds of samples, each with carefully labeled 3D editing regions. Experiments demonstrate that VoxHammer significantly outperforms existing methods in terms of both 3D consistency of preserved regions and overall quality. Our method holds promise for synthesizing high-quality edited paired data, thereby laying the data foundation for in-context 3D generation. See our project page at https://huanngzh.github.io/VoxHammer-Page/.
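For readers who want the key-value caching mentioned in the abstract made concrete, below is a minimal, hypothetical sketch of an attention layer with such a cache. `CachedAttention`, the `mode` flag, and `keep_mask` are illustrative names and assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn.functional as F

# Hypothetical attention module illustrating the key-value caching the
# abstract describes: during inversion it stores each timestep's key/value
# tokens; during editing, tokens of preserved regions are swapped back so
# edited parts attend to the original context.
class CachedAttention(torch.nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.to_qkv = torch.nn.Linear(dim, 3 * dim)
        self.cache = {}                    # timestep -> (k, v) from inversion

    def forward(self, x, t, mode="invert", keep_mask=None):
        # x: (B, N, dim) latent tokens; keep_mask: (B, N) bool, True = preserve
        q, k, v = self.to_qkv(x).chunk(3, dim=-1)
        if mode == "invert":
            self.cache[t] = (k.detach(), v.detach())  # cache during inversion
        elif mode == "edit" and keep_mask is not None:
            k_inv, v_inv = self.cache[t]
            m = keep_mask.unsqueeze(-1)               # (B, N, 1)
            k = torch.where(m, k_inv, k)              # preserved tokens reuse
            v = torch.where(m, v_inv, v)              # their inverted K/V
        return F.scaled_dot_product_attention(q, k, v)
```

Calling the layer with `mode="invert"` at each inversion timestep fills the cache; replaying the same timesteps with `mode="edit"` during denoising then lets preserved tokens contribute their original keys and values, which is what keeps the edited parts coherent with their surroundings.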