DressRecon: Freeform 4D Human Reconstruction from Monocular Video

Jeff Tan, Donglai Xiang, Shubham Tulsiani, Deva Ramanan, Gengshan Yang

2024-10-02

Summary

This paper introduces DressRecon, a method that reconstructs detailed, time-consistent 4D human models from a single monocular video, even when the subject wears loose clothing or interacts with handheld objects, enabling realistic animation and rendering.

What's the problem?

Previous methods for reconstructing humans from video typically handle only tight clothing with no object interactions, or they require calibrated multi-view captures or personalized template scans that are expensive to collect at scale. This makes it hard to capture realistic motion when people wear loose clothing or hold objects.

What's the solution?

DressRecon combines generic priors about articulated human body shape (learned from large-scale data) with a video-specific "bag-of-bones" deformation model that is fit to each video through test-time optimization. A neural implicit model disentangles body and clothing deformations as separate motion layers, so the fine-grained motion of loose clothing can be represented independently of the underlying body. During optimization, image-based priors such as body pose, surface normals, and optical flow keep the reconstruction accurate and temporally consistent. The resulting neural fields can be extracted as time-consistent meshes, or further optimized into explicit 3D Gaussians for high-fidelity interactive rendering.
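To make the "separate motion layers" idea concrete, here is a minimal, hypothetical sketch of a layered warp: points are deformed by a coarse articulated body layer and a finer free-form clothing layer before querying a canonical implicit field. All names, bone counts, and the Gaussian-skinning details are illustrative assumptions, not the authors' exact implementation (which, in practice, would also constrain the per-bone transforms to valid rotations).

```python
# Hypothetical sketch of a two-layer "bag-of-bones" deformation model.
# Not DressRecon's actual code; dimensions and modules are assumptions.
import torch
import torch.nn as nn


class DeformationLayer(nn.Module):
    """Warp points by skinning them against a set of movable bones."""

    def __init__(self, num_bones: int, feat_dim: int = 64):
        super().__init__()
        self.num_bones = num_bones
        # Per-frame 3x4 transform for each bone, predicted from a time embedding.
        # (A real implementation would project the 3x3 block to a rotation.)
        self.bone_mlp = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(),
            nn.Linear(128, num_bones * 12),
        )
        # Learnable bone centers and scales used for Gaussian skinning weights.
        self.centers = nn.Parameter(torch.randn(num_bones, 3) * 0.1)
        self.log_scales = nn.Parameter(torch.zeros(num_bones, 3))

    def forward(self, pts: torch.Tensor, t_embed: torch.Tensor) -> torch.Tensor:
        B = self.num_bones
        T = self.bone_mlp(t_embed).view(B, 3, 4)
        R, trans = T[:, :, :3], T[:, :, 3]                    # (B,3,3), (B,3)
        # Soft skinning weights from distance to each bone center.
        d2 = ((pts[:, None] - self.centers[None]) / self.log_scales.exp()[None]) ** 2
        w = torch.softmax(-d2.sum(-1), dim=-1)                 # (N, B)
        warped = torch.einsum('bij,nj->nbi', R, pts) + trans[None]  # (N, B, 3)
        return (w[..., None] * warped).sum(dim=1)              # (N, 3)


class LayeredWarp(nn.Module):
    """Compose a coarse body layer with a finer clothing layer."""

    def __init__(self):
        super().__init__()
        self.body = DeformationLayer(num_bones=25)       # articulated body motion
        self.clothing = DeformationLayer(num_bones=64)   # free-form clothing motion

    def forward(self, pts_canonical: torch.Tensor, t_embed: torch.Tensor) -> torch.Tensor:
        x = self.body(pts_canonical, t_embed)
        return self.clothing(x, t_embed)
```

In this sketch, the canonical-space geometry stays fixed while the two layers account for body articulation and clothing deformation separately, which is the disentanglement the paragraph above describes.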

Why it matters?

This research is important because it improves how we can create realistic digital humans for applications in animation, gaming, and virtual reality. By enabling accurate representations of people in various clothing styles, DressRecon helps enhance the quality of visual content and makes it easier to create interactive experiences.

Abstract

We present a method to reconstruct time-consistent human body models from monocular videos, focusing on extremely loose clothing or handheld object interactions. Prior work in human reconstruction is either limited to tight clothing with no object interactions, or requires calibrated multi-view captures or personalized template scans which are costly to collect at scale. Our key insight for high-quality yet flexible reconstruction is the careful combination of generic human priors about articulated body shape (learned from large-scale training data) with video-specific articulated "bag-of-bones" deformation (fit to a single video via test-time optimization). We accomplish this by learning a neural implicit model that disentangles body versus clothing deformations as separate motion model layers. To capture subtle geometry of clothing, we leverage image-based priors such as human body pose, surface normals, and optical flow during optimization. The resulting neural fields can be extracted into time-consistent meshes, or further optimized as explicit 3D Gaussians for high-fidelity interactive rendering. On datasets with highly challenging clothing deformations and object interactions, DressRecon yields higher-fidelity 3D reconstructions than prior art. Project page: https://jefftan969.github.io/dressrecon/
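The abstract mentions fitting the model to a single video with image-based priors (body pose, surface normals, optical flow) as supervision during test-time optimization. The snippet below is a hedged sketch of how such a per-frame loss could be assembled; the dictionary keys, loss weights, and loss forms are placeholders rather than the paper's actual objective, and the priors would come from off-the-shelf estimators run on the input video.

```python
# Hypothetical per-frame loss combining renders of the deformable model
# with image-based priors extracted from the monocular video.
import torch


def reconstruction_loss(render: dict, priors: dict,
                        w_rgb=1.0, w_sil=1.0, w_normal=0.5, w_flow=0.5) -> torch.Tensor:
    """render/priors hold per-pixel tensors: 'rgb', 'mask', 'normal', 'flow'."""
    # Photometric and silhouette terms anchor the overall shape.
    loss = w_rgb * (render["rgb"] - priors["rgb"]).abs().mean()
    loss = loss + w_sil * (render["mask"] - priors["mask"]).pow(2).mean()
    # Surface-normal prior helps capture subtle clothing geometry.
    loss = loss + w_normal * (1 - (render["normal"] * priors["normal"]).sum(-1)).mean()
    # Optical-flow prior encourages temporally consistent motion.
    loss = loss + w_flow * (render["flow"] - priors["flow"]).abs().mean()
    return loss
```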