MOVIS: Enhancing Multi-Object Novel View Synthesis for Indoor Scenes
Ruijie Lu, Yixin Chen, Junfeng Ni, Baoxiong Jia, Yu Liu, Diwen Wan, Gang Zeng, Siyuan Huang
2024-12-17

Summary
This paper introduces MOVIS, a method for multi-object novel view synthesis (NVS) in indoor scenes: generating realistic images of a scene containing multiple objects as it would appear from new, unseen viewpoints.
What's the problem?
Most existing view-synthesis methods are trained on single, isolated objects, so they work poorly when a scene contains several objects at once. Applied directly to multi-object scenes, they tend to place objects incorrectly and render their shapes and appearances inconsistently, producing images that do not hold together when the scene is viewed from different angles.
What's the solution?
MOVIS strengthens a view-conditioned diffusion model's awareness of scene structure in three ways. First, it injects structure-aware features, namely depth maps and object masks, into the denoising U-Net, helping the model recognize individual object instances and their spatial relationships (a minimal sketch of this input injection follows below). Second, an auxiliary task requires the model to simultaneously predict the object masks in the novel view, further sharpening its ability to differentiate and place objects. Third, a structure-guided timestep sampling scheduler used during training balances learning global object placement against recovering fine-grained detail. Together, these changes produce higher-quality images that remain consistent across viewpoints.
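The paper describes this feature injection only at a high level; the snippet below is a minimal PyTorch-style sketch, assuming the depth map and instance mask are each passed through a small encoder and concatenated with the noisy latent along the channel dimension before entering the denoising U-Net. Every module name and channel width here is an illustrative assumption, not the paper's implementation.

```python
import torch
import torch.nn as nn

class StructureAwareInput(nn.Module):
    """Hypothetical fusion of structure-aware features with the noisy
    latent before the denoising U-Net. Channel sizes are illustrative."""

    def __init__(self, latent_ch=4, feat_ch=8):
        super().__init__()
        # Assumed lightweight encoders lifting single-channel depth /
        # mask maps to feature maps at the latent resolution.
        self.depth_enc = nn.Conv2d(1, feat_ch, kernel_size=3, padding=1)
        self.mask_enc = nn.Conv2d(1, feat_ch, kernel_size=3, padding=1)
        # Project the concatenated stack back to the U-Net input width.
        self.proj = nn.Conv2d(latent_ch + 2 * feat_ch, latent_ch, kernel_size=1)

    def forward(self, noisy_latent, depth, instance_mask):
        d = self.depth_enc(depth)            # (B, feat_ch, H, W)
        m = self.mask_enc(instance_mask)     # (B, feat_ch, H, W)
        x = torch.cat([noisy_latent, d, m], dim=1)
        return self.proj(x)                  # fed into the denoising U-Net
```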
Why it matters?
This research is important because it allows for more accurate and realistic image generation in applications like virtual reality, gaming, and interior design. By improving how computers synthesize images of complex scenes, MOVIS can help create more immersive experiences and tools that people can use in their everyday lives.
Abstract
Repurposing pre-trained diffusion models has proven effective for novel view synthesis (NVS). However, these methods are mostly limited to a single object; directly applying such methods to compositional multi-object scenarios yields inferior results, especially incorrect object placement and inconsistent shape and appearance under novel views. How to enhance and systematically evaluate the cross-view consistency of such models remains under-explored. To address this issue, we propose MOVIS to enhance the structural awareness of the view-conditioned diffusion model for multi-object NVS in terms of model inputs, auxiliary tasks, and training strategy. First, we inject structure-aware features, including depth and object mask, into the denoising U-Net to enhance the model's comprehension of object instances and their spatial relationships. Second, we introduce an auxiliary task requiring the model to simultaneously predict novel view object masks, further improving the model's capability in differentiating and placing objects. Finally, we conduct an in-depth analysis of the diffusion sampling process and carefully devise a structure-guided timestep sampling scheduler during training, which balances the learning of global object placement and fine-grained detail recovery. To systematically evaluate the plausibility of synthesized images, we propose to assess cross-view consistency and novel view object placement alongside existing image-level NVS metrics. Extensive experiments on challenging synthetic and realistic datasets demonstrate that our method exhibits strong generalization capabilities and produces consistent novel view synthesis, highlighting its potential to guide future 3D-aware multi-object NVS tasks.
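The abstract describes the auxiliary mask task and the timestep scheduler only qualitatively; the sketch below shows one plausible reading, assuming the training-time distribution over diffusion timesteps drifts from large timesteps (noisy, dominated by global placement) toward small ones (dominated by detail recovery), and that the novel-view mask loss is added to the standard denoising loss with a fixed weight. The Gaussian schedule, the 0.2 spread, and the 0.1 mask weight are assumptions for illustration, not values from the paper.

```python
import torch
import torch.nn.functional as F

def sample_timesteps(batch_size, train_step, max_steps, num_timesteps=1000):
    """Illustrative structure-guided timestep sampler: early in training
    it favors large timesteps, where the model learns global object
    placement; later it shifts toward small timesteps, where fine-grained
    details are recovered."""
    progress = train_step / max_steps                # 0 -> 1 over training
    mean = (1.0 - progress) * (num_timesteps - 1)    # drifts high -> low
    std = 0.2 * num_timesteps                        # assumed spread
    t = torch.normal(mean, std, size=(batch_size,))
    return t.clamp(0, num_timesteps - 1).long()

def training_loss(pred_noise, true_noise, mask_logits, gt_mask, mask_weight=0.1):
    """Denoising objective plus the auxiliary novel-view mask loss."""
    noise_loss = F.mse_loss(pred_noise, true_noise)
    mask_loss = F.binary_cross_entropy_with_logits(mask_logits, gt_mask)
    return noise_loss + mask_weight * mask_loss
```

The intuition follows the abstract's analysis of the sampling process: large timesteps govern coarse layout, so emphasizing them early lets the model settle object placement before refining appearance.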