Towards Realistic Example-based Modeling via 3D Gaussian Stitching
Xinyu Gao, Ziyi Yang, Bingchen Gong, Xiaoguang Han, Sipeng Yang, Xiaogang Jin
2024-08-29

Summary
This paper presents 3D Gaussian Stitching, a method that creates realistic 3D models by seamlessly combining parts of existing models represented with 3D Gaussian Splatting.
What's the problem?
Creating realistic 3D models from real-world captures is difficult because traditional example-based modeling methods focus mainly on composing shapes rather than appearance. As a result, it is hard to blend the textures of different parts seamlessly, and combining multiple captured models into one cohesive scene leaves visible seams and unrealistic results.
What's the solution?
The authors propose an example-based modeling method built on 3D Gaussian Splatting that stitches parts of different models together. They developed a graphical user interface (GUI) that lets users segment and transform these models in real time. The process involves three main steps: first, users segment and transform the Gaussian models through the GUI; second, a KNN analysis identifies the boundary points where the source and target models intersect (sketched below); and third, a two-phase optimization with sampling-based cloning and gradient constraints blends textures and details smoothly. This approach enables more flexible editing and more realistic results than previous methods.
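The boundary analysis in step two can be pictured as a simple k-nearest-neighbor query over the Gaussian centers of the two models. The sketch below shows one plausible way to do it; the array names `src_xyz` and `tgt_xyz`, the neighbor count `k`, and the distance threshold `tau` are illustrative assumptions, not values taken from the paper.

```python
# Sketch of the KNN boundary-analysis step between two stitched Gaussian
# fields. Assumed inputs (not from the paper's code): Gaussian centers as
# (N, 3) NumPy arrays `src_xyz` and `tgt_xyz`, plus a hand-picked radius
# `tau` that decides what counts as the intersecting boundary region.
import numpy as np
from scipy.spatial import cKDTree


def find_boundary_points(src_xyz: np.ndarray, tgt_xyz: np.ndarray,
                         k: int = 8, tau: float = 0.05) -> np.ndarray:
    """Return indices of target Gaussians that lie close to the source model."""
    tree = cKDTree(src_xyz)              # spatial index over source centers
    dists, _ = tree.query(tgt_xyz, k=k)  # distances to k nearest source points
    mean_dist = dists.mean(axis=1)       # average neighbor distance per target
    return np.where(mean_dist < tau)[0]  # targets inside the boundary band
```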
Why it matters?
This research is important because it enhances the ability to create high-quality 3D graphics for applications in gaming, virtual reality, and film. By improving how models are combined and edited, it enables artists and developers to produce more lifelike environments and characters, making digital experiences more immersive.
Abstract
Using parts of existing models to rebuild new models, commonly termed example-based modeling, is a classical methodology in the realm of computer graphics. Previous works mostly focus on shape composition, making them very hard to use for realistic composition of 3D objects captured from real-world scenes. This has motivated combining multiple NeRFs into a single 3D scene to achieve seamless appearance blending. However, the current SeamlessNeRF method struggles to achieve interactive editing and harmonious stitching for real-world scenes due to its gradient-based strategy and grid-based representation. To this end, we present an example-based modeling method that combines multiple Gaussian fields in a point-based representation using sample-guided synthesis. Specifically, for composition, we create a GUI to segment and transform multiple fields in real time, easily obtaining a semantically meaningful composition of models represented by 3D Gaussian Splatting (3DGS). For texture blending, due to the discrete and irregular nature of 3DGS, straightforwardly applying gradient propagation as in SeamlessNeRF is not supported. Thus, a novel sampling-based cloning method is proposed to harmonize the blending while preserving the original rich texture and content. Our workflow consists of three steps: 1) real-time segmentation and transformation of a Gaussian model using a well-tailored GUI, 2) KNN analysis to identify boundary points in the intersecting area between the source and target models, and 3) two-phase optimization of the target model using sampling-based cloning and gradient constraints. Extensive experimental results validate that our approach significantly outperforms previous works in terms of realistic synthesis, demonstrating its practicality. More demos are available at https://ingra14m.github.io/gs_stitching_website.
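To make the two-phase optimization described in the abstract more concrete, here is a minimal "clone then propagate" sketch: boundary colors are sampled (cloned) from the nearest source Gaussians, then non-boundary colors are optimized under a neighbor-smoothness constraint standing in for the gradient constraints, so the cloned appearance diffuses inward. It treats each Gaussian as carrying a single RGB value for simplicity (real 3DGS stores spherical-harmonic coefficients), and all function and variable names are hypothetical; this is a sketch under those assumptions, not the authors' implementation.

```python
# Hedged sketch of a two-phase blend, loosely following the abstract's
# description (sampling-based cloning + gradient-style constraints).
# Inputs are assumed to be (N, 3) NumPy arrays of Gaussian centers and
# per-Gaussian RGB values; `boundary_idx` comes from a KNN boundary step.
import numpy as np
import torch
from scipy.spatial import cKDTree


def blend_colors(src_xyz, src_rgb, tgt_xyz, tgt_rgb, boundary_idx,
                 k=8, iters=500, lr=1e-2):
    """Phase 1: clone boundary colors from the source; Phase 2: propagate."""
    b_idx = torch.as_tensor(np.asarray(boundary_idx), dtype=torch.long)

    # Phase 1: sampling-based cloning -- each boundary target Gaussian copies
    # the color of its nearest source Gaussian.
    _, nn = cKDTree(src_xyz).query(tgt_xyz[boundary_idx], k=1)
    rgb = torch.as_tensor(tgt_rgb, dtype=torch.float32).clone()
    rgb[b_idx] = torch.as_tensor(src_rgb[nn], dtype=torch.float32)

    # Precompute KNN neighbors within the target model for the smoothness term.
    _, neighbor_idx = cKDTree(tgt_xyz).query(tgt_xyz, k=k + 1)
    neighbor_idx = torch.as_tensor(neighbor_idx[:, 1:], dtype=torch.long)  # drop self

    # Phase 2: constrained propagation -- keep boundary colors fixed and pull
    # every other color toward the mean of its neighbors, so the cloned
    # appearance diffuses smoothly into the interior of the target model.
    boundary_mask = torch.zeros(len(tgt_xyz), dtype=torch.bool)
    boundary_mask[b_idx] = True
    rgb.requires_grad_(True)
    opt = torch.optim.Adam([rgb], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        neighbor_mean = rgb[neighbor_idx].mean(dim=1)
        loss = ((rgb - neighbor_mean) ** 2)[~boundary_mask].mean()
        loss.backward()
        rgb.grad[boundary_mask] = 0.0  # boundary colors stay as cloned
        opt.step()
    return rgb.detach()
```

In the paper the second phase is driven by rendering-time optimization of the 3DGS model; the neighbor-mean loss above is only a stand-in to illustrate how cloned boundary appearance can be propagated under a smoothness constraint.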