MeshSplatting: Differentiable Rendering with Opaque Meshes

Jan Held, Sanghyun Son, Renaud Vandeghen, Daniel Rebain, Matheus Gadelha, Yi Zhou, Anthony Cioppa, Ming C. Lin, Marc Van Droogenbroeck, Andrea Tagliasacchi

2025-12-15

Summary

This paper introduces MeshSplatting, a method that reconstructs 3D models from images as triangle meshes, so the results can be used directly in applications like augmented reality, virtual reality, and video games.

What's the problem?

Recent techniques like 3D Gaussian Splatting are great at producing realistic views of a scene from new angles, but they represent the scene as a cloud of point-like primitives rather than a surface. This format doesn't fit the mesh-based pipelines used by most 3D engines, making it difficult to use the results directly in games or AR/VR apps.

What's the solution?

MeshSplatting tackles this by directly creating a mesh – a surface made of connected triangles – while jointly learning how that surface looks. It connects the triangles using a restricted Delaunay triangulation and refines the result so the surface stays smooth and consistent. The whole pipeline is built on differentiable rendering, meaning rendering errors can be propagated back through it so the geometry and appearance improve automatically during training.
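The paper's restricted Delaunay triangulation operates on points lying on a 3D surface; as a simplified illustration of the core idea – a Delaunay triangulation turning loose points into explicit triangle connectivity – here is a minimal 2D sketch using SciPy. This is not the authors' code, just a toy example of the concept:

```python
import numpy as np
from scipy.spatial import Delaunay

# Sample 2D points (in the paper, the points lie on a 3D surface
# and a *restricted* Delaunay triangulation is used; plain 2D
# Delaunay here only illustrates the connectivity-building step).
rng = np.random.default_rng(0)
points = rng.random((20, 2))

tri = Delaunay(points)

# tri.simplices lists, for each triangle, the indices of its three
# vertices – exactly the kind of explicit connectivity a mesh-based
# engine needs, as opposed to an unstructured point cloud.
print(tri.simplices.shape)  # (num_triangles, 3)
```

Because each triangle is stored as vertex indices, the output can be exported to standard mesh formats and rendered by conventional rasterization pipelines.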

Why it matters?

This work is important because it closes the gap between advanced neural rendering techniques and traditional 3D graphics. MeshSplatting creates better quality meshes than previous methods, trains faster, and uses less memory, allowing for real-time interaction with these scenes in games and other 3D applications. It makes it easier to bring these realistically rendered scenes into the interactive world.

Abstract

Primitive-based splatting methods like 3D Gaussian Splatting have revolutionized novel view synthesis with real-time rendering. However, their point-based representations remain incompatible with mesh-based pipelines that power AR/VR and game engines. We present MeshSplatting, a mesh-based reconstruction approach that jointly optimizes geometry and appearance through differentiable rendering. By enforcing connectivity via restricted Delaunay triangulation and refining surface consistency, MeshSplatting creates end-to-end smooth, visually high-quality meshes that render efficiently in real-time 3D engines. On Mip-NeRF360, it boosts PSNR by +0.69 dB over the current state-of-the-art MiLo for mesh-based novel view synthesis, while training 2x faster and using 2x less memory, bridging neural rendering and interactive 3D graphics for seamless real-time scene interaction. The project page is available at https://meshsplatting.github.io/.