FastMesh: Efficient Artistic Mesh Generation via Component Decoupling
Jeonghwan Kim, Yushi Lan, Armando Fortes, Yongwei Chen, Xingang Pan
2025-08-27
Summary
This paper presents a new way to quickly create detailed 3D models, specifically focusing on how to build the mesh – the network of triangles that make up the surface of the model.
What's the problem?
Current methods for generating 3D meshes involve breaking them down into a sequence of instructions, like building with LEGOs. However, these instructions often repeat information about the corners (vertices) of the triangles because those corners are used multiple times. This repetition makes the instructions very long and the process of building the model slow and inefficient.
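To see how much repetition this causes, consider a simple cube: it has only 8 distinct corners, but writing out its 12 triangles corner-by-corner references a corner 36 times. The sketch below (an illustrative toy, not the paper's tokenizer) counts both quantities:

```python
# Toy illustration of vertex redundancy in face-by-face token sequences.
# A unit cube has 8 distinct vertices but 12 triangular faces, and each
# face repeats 3 vertex references in the sequence.
cube_faces = [
    (0, 1, 2), (0, 2, 3),  # bottom
    (4, 5, 6), (4, 6, 7),  # top
    (0, 1, 5), (0, 5, 4),  # sides
    (1, 2, 6), (1, 6, 5),
    (2, 3, 7), (2, 7, 6),
    (3, 0, 4), (3, 4, 7),
]

unique_vertices = {v for face in cube_faces for v in face}
vertex_references = sum(len(face) for face in cube_faces)

print(len(unique_vertices))  # 8 distinct corners
print(vertex_references)     # 36 corner references in the face list
```

Generating only the 8 corners, rather than all 36 references, is the kind of reduction the paper exploits.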
What's the solution?
The researchers tackled this by separating how the corners and the faces are created. First, they use a system that generates only the corners, which drastically reduces the amount of information needed. Then, a second system determines, in a single step, how those corners connect to form the faces of the mesh. Finally, they added steps to make the corner placement look more natural and to clean up any unwanted connections between the faces.
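The second stage can be pictured as scoring every pair of corners at once, thresholding those scores into an adjacency matrix, and then reading triangles off as mutually connected triples. The sketch below is a minimal toy of that idea (the function names, shapes, and threshold are hypothetical, and a plain score table stands in for the paper's bidirectional transformer):

```python
# Minimal sketch: pairwise connection scores -> adjacency matrix -> triangles.
# This stands in for the paper's single-step face construction; in the actual
# method, the scores would come from a learned bidirectional transformer.
from itertools import combinations

def adjacency_from_scores(scores, threshold=0.5):
    """Binarize symmetric pairwise scores into an adjacency matrix."""
    n = len(scores)
    return [[1 if i != j and scores[i][j] >= threshold else 0
             for j in range(n)] for i in range(n)]

def faces_from_adjacency(adj):
    """Read triangular faces off as triples of mutually connected vertices."""
    n = len(adj)
    return [(i, j, k) for i, j, k in combinations(range(n), 3)
            if adj[i][j] and adj[j][k] and adj[i][k]]

# Toy example: 4 corners whose scores encode two triangles sharing edge (0, 2).
scores = [
    [0.0, 0.9, 0.9, 0.8],
    [0.9, 0.0, 0.9, 0.1],
    [0.9, 0.9, 0.0, 0.8],
    [0.8, 0.1, 0.8, 0.0],
]
faces = faces_from_adjacency(adjacency_from_scores(scores))
print(faces)  # [(0, 1, 2), (0, 2, 3)]
```

Because all pairs are scored simultaneously, the faces emerge in one pass instead of one token at a time, which is where the speedup over purely sequential generation comes from.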
Why it matters?
This new approach is more than eight times faster than existing methods while also producing higher-quality 3D models. This matters because it could speed up the creation of 3D content for things like video games, movies, and design, making it easier and quicker to bring virtual objects to life.
Abstract
Recent mesh generation approaches typically tokenize triangle meshes into sequences of tokens and train autoregressive models to generate these tokens sequentially. Despite substantial progress, such token sequences inevitably reuse vertices multiple times to fully represent manifold meshes, as each vertex is shared by multiple faces. This redundancy leads to excessively long token sequences and inefficient generation processes. In this paper, we propose an efficient framework that generates artistic meshes by treating vertices and faces separately, significantly reducing redundancy. We employ an autoregressive model solely for vertex generation, decreasing the token count to approximately 23% of that required by the most compact existing tokenizer. Next, we leverage a bidirectional transformer to complete the mesh in a single step by capturing inter-vertex relationships and constructing the adjacency matrix that defines the mesh faces. To further improve the generation quality, we introduce a fidelity enhancer to refine vertex positioning into more natural arrangements and propose a post-processing framework to remove undesirable edge connections. Experimental results show that our method achieves more than 8× faster speed on mesh generation compared to state-of-the-art approaches, while producing higher mesh quality.