AnySplat: Feed-forward 3D Gaussian Splatting from Unconstrained Views
Lihan Jiang, Yucheng Mao, Linning Xu, Tao Lu, Kerui Ren, Yichen Jin, Xudong Xu, Mulin Yu, Jiangmiao Pang, Feng Zhao, Dahua Lin, Bo Dai
2025-05-30
Summary
This paper introduces AnySplat, a new AI technique that can create realistic 3D views of objects or scenes from ordinary photos, even when it doesn't know where each camera was placed.
What's the problem?
The problem is that most 3D modeling methods need to know the exact position and angle of the camera for every photo, and that information isn't always available, especially with casual or unorganized photo collections. Without it, it's really hard to build accurate 3D models or generate new views.
What's the solution?
The researchers built a feed-forward network based on 3D Gaussian splatting: in a single pass, the model estimates the missing camera information while also producing a high-quality 3D reconstruction of the scene. Their design works well whether there are just a few photos or many, making it flexible and efficient.
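To make the idea concrete, here is a minimal, hypothetical sketch of such a pose-free feed-forward model in PyTorch. It is not the authors' architecture; every module name, dimension, and output layout below is an illustrative assumption. The point is the overall shape: unposed images go in, and one forward pass returns both estimated camera parameters and 3D Gaussian parameters.

```python
# Hypothetical sketch (not AnySplat's actual code): a feed-forward network that
# takes unposed images and, in a single pass, predicts both per-view camera
# parameters and per-patch 3D Gaussian parameters. All names and sizes are
# illustrative assumptions.
import torch
import torch.nn as nn


class PoseFreeGaussianNet(nn.Module):
    def __init__(self, feat_dim: int = 256, patch: int = 16):
        super().__init__()
        # Shared image encoder: a simple patch embedding plus a small
        # transformer, standing in for a multi-view backbone.
        self.patch_embed = nn.Conv2d(3, feat_dim, kernel_size=patch, stride=patch)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=feat_dim, nhead=8, batch_first=True
        )
        self.backbone = nn.TransformerEncoder(encoder_layer, num_layers=4)
        # Head 1: per-view camera estimate (3D translation + axis-angle
        # rotation + focal length = 7 numbers per view).
        self.pose_head = nn.Linear(feat_dim, 7)
        # Head 2: per-patch Gaussian parameters
        # (3 position, 3 scale, 4 quaternion, 1 opacity, 3 color = 14).
        self.gaussian_head = nn.Linear(feat_dim, 14)

    def forward(self, images: torch.Tensor):
        """images: (V, 3, H, W) unposed views of one scene."""
        tokens = self.patch_embed(images)            # (V, C, H/p, W/p)
        tokens = tokens.flatten(2).transpose(1, 2)   # (V, N, C) patch tokens
        feats = self.backbone(tokens)                # cross-patch reasoning
        # Camera parameters come from the pooled per-view feature.
        poses = self.pose_head(feats.mean(dim=1))    # (V, 7)
        # One Gaussian per patch token; a real model would go per pixel.
        gaussians = self.gaussian_head(feats)        # (V, N, 14)
        return poses, gaussians


if __name__ == "__main__":
    net = PoseFreeGaussianNet()
    views = torch.randn(4, 3, 224, 224)  # four casual photos, no camera info
    poses, gaussians = net(views)
    print(poses.shape, gaussians.shape)  # (4, 7) and (4, 196, 14)
```

Because everything is predicted in one feed-forward pass, there is no per-scene optimization loop, which is what makes this style of approach work with either a handful of photos or a large collection.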
Why it matters?
This is important because it means anyone can turn regular pictures into 3D scenes without needing special equipment or perfect information, which could help with things like virtual reality, gaming, or even preserving memories in a more lifelike way.
Abstract
AnySplat is a feed-forward network that performs novel view synthesis from uncalibrated images without known camera poses, using 3D Gaussian primitives and a unified design that delivers efficiency and quality across both sparse- and dense-view settings.