Instant4D: 4D Gaussian Splatting in Minutes
Zhanpeng Luo, Haoxi Ran, Li Lu
2025-10-13
Summary
This paper introduces Instant4D, a new system for quickly creating 3D models of dynamic scenes from regular videos, like those you'd take with your phone.
What's the problem?
Currently, building 3D models from videos is slow and difficult, especially if you don't have special equipment like multiple cameras or depth sensors. Existing methods require a lot of computing time and struggle with everyday, unedited videos because they need to figure out the camera's position and the scene's geometry at the same time.
What's the solution?
Instant4D solves this by first using a technique called deep visual SLAM to estimate the camera's movement and the scene's basic shape. Then, it simplifies the 3D representation by removing unnecessary details, making the model much smaller and faster to work with. Finally, it uses a new way to represent the scene over time, called 4D Gaussian representation, which speeds up the process dramatically, allowing a video to be reconstructed in about 10 minutes.
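The grid-pruning idea above can be sketched as voxel downsampling: points that land in the same grid cell are merged so only one representative survives. This is a minimal illustration, not the paper's actual implementation; the function name `grid_prune` and the voxel size are assumptions.

```python
import numpy as np

def grid_prune(points, voxel_size=0.05):
    """Hypothetical sketch of grid pruning: keep one representative
    point (e.g. a Gaussian center) per occupied voxel, discarding
    redundant points that fall into the same cell."""
    # Quantize each point to an integer voxel index.
    voxel_ids = np.floor(points / voxel_size).astype(np.int64)
    # Keep the first point encountered in each occupied voxel.
    _, keep = np.unique(voxel_ids, axis=0, return_index=True)
    return points[np.sort(keep)]

# Example: a dense cluster of 1000 points collapses to far fewer.
rng = np.random.default_rng(0)
dense = rng.normal(0.0, 0.02, size=(1000, 3))
pruned = grid_prune(dense, voxel_size=0.05)
print(len(dense), "->", len(pruned))
```

In the real system the pruning operates on the point cloud recovered by SLAM before it is converted into Gaussians, which is what shrinks the model to a fraction of its original size.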
Why does it matter?
This work is important because it makes 3D reconstruction accessible without expensive equipment or long processing times. It opens the door to creating 3D models from everyday videos, which could be useful for things like virtual reality, augmented reality, or simply preserving memories in a new way.
Abstract
Dynamic view synthesis has seen significant advances, yet reconstructing scenes from uncalibrated, casual video remains challenging due to slow optimization and complex parameter estimation. In this work, we present Instant4D, a monocular reconstruction system that leverages a native 4D representation to efficiently process casual video sequences within minutes, without calibrated cameras or depth sensors. Our method begins with geometric recovery through deep visual SLAM, followed by grid pruning to optimize the scene representation. Our design significantly reduces redundancy while maintaining geometric integrity, cutting model size to under 10% of its original footprint. To handle temporal dynamics efficiently, we introduce a streamlined 4D Gaussian representation, achieving a 30x speed-up and reducing training time to within two minutes, while maintaining competitive performance across several benchmarks. Our method reconstructs a single video within 10 minutes, whether on the Dycheck dataset or for a typical 200-frame video. We further apply our model to in-the-wild videos, showcasing its generalizability. Our project website is published at https://instant4d.github.io/.