
4DGS360: 360° Gaussian Reconstruction of Dynamic Objects from a Single Video

Jae Won Jang, Yeonjin Chang, Wonsik Shin, Juhwan Cho, Nojun Kwak

2026-03-26


Summary

This paper introduces 4DGS360, a new method for creating complete 3D models of moving objects from ordinary video shot with a single camera, so the object can be viewed from all angles.

What's the problem?

Existing methods for building 3D models from video often fail to produce accurate, consistent 360-degree views of an object. Because they rely heavily on what is visible from each individual camera angle, they produce distortions and inaccuracies in regions hidden from view. Essentially, the models 'overfit' to the visible parts and handle the unseen parts poorly.

What's the solution?

The researchers solved this by starting with a better initial 3D guess of the object's shape. They developed a '3D tracker' called AnchorTAP3D that uses reliable points visible in the video to create stable 3D paths for different parts of the object. This helps to fill in the missing information in hidden areas and create a more complete and accurate 3D reconstruction. They then refine this initial guess through optimization techniques.
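The summary does not spell out how AnchorTAP3D combines anchors with the raw trajectories, but the core idea of using confident points to suppress drift can be illustrated with a simple confidence-weighted blend. The function below is a hypothetical sketch, not the paper's actual algorithm: `raw_tracks`, `anchors`, and `confidence` are assumed inputs standing in for a base 3D tracker's output, anchor positions lifted from confident 2D tracks, and per-point reliability scores.

```python
import numpy as np

def anchor_stabilized_tracks(raw_tracks, anchors, confidence):
    """Illustrative sketch of anchor-based drift suppression.

    raw_tracks: (T, N, 3) noisy 3D point trajectories over T frames
    anchors:    (T, N, 3) 3D positions lifted from confident 2D track points
    confidence: (T, N) values in [0, 1]; 1 means fully trust the anchor

    Each trajectory point is pulled toward its anchor in proportion to
    the anchor's confidence, leaving low-confidence regions to the raw
    tracker. This is a conceptual stand-in for AnchorTAP3D, whose exact
    formulation is not given in this summary.
    """
    w = confidence[..., None]  # broadcast weights to (T, N, 1)
    return (1.0 - w) * raw_tracks + w * anchors
```

With confidence 0 the raw trajectory passes through unchanged; with confidence 1 the point snaps to its anchor, which is the drift-suppressing behavior the paper attributes to its anchors.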

Why it matters?

This work is important because it enables more realistic and accurate 3D reconstructions of objects from everyday videos. The authors also created a new, more challenging dataset called iPhone360 to evaluate such methods, pushing the field forward and enabling better 3D modeling from video, with applications in virtual reality, robotics, and special effects.

Abstract

We introduce 4DGS360, a diffusion-free framework for 360° dynamic object reconstruction from casual monocular video. Existing methods often fail to reconstruct consistent 360° geometry, as their heavy reliance on 2D-native priors causes initial points to overfit to visible surfaces in each training view. 4DGS360 addresses this challenge through an advanced 3D-native initialization that mitigates the geometric ambiguity of occluded regions. Our proposed 3D tracker, AnchorTAP3D, produces reinforced 3D point trajectories by leveraging confident 2D track points as anchors, suppressing drift and providing reliable initialization that preserves geometry in occluded regions. This initialization, combined with optimization, yields coherent 360° 4D reconstructions. We further present iPhone360, a new benchmark where test cameras are placed up to 135° apart from training views, enabling 360° evaluation that existing datasets cannot provide. Experiments show that 4DGS360 achieves state-of-the-art performance on the iPhone360, iPhone, and DAVIS datasets, both qualitatively and quantitatively.