LoST: Level of Semantics Tokenization for 3D Shapes
Niladri Shekhar Dutt, Zifan Shi, Paul Guerrero, Chun-Hao Paul Huang, Duygu Ceylan, Niloy J. Mitra, Xuelin Chen
2026-03-19
Summary
This paper introduces a new way to break down 3D shapes into smaller pieces, called tokens, for use in artificial intelligence models that *generate* 3D objects. It focuses on making these tokens more meaningful and efficient than current methods.
What's the problem?
Currently, the best ways to tokenize 3D shapes borrow the level-of-detail hierarchies that computers use for rendering graphics, like in video games. These hierarchies are inefficient for generation: they spend many tokens on spatial detail without prioritizing the important parts of the shape, and they don't capture what the shape *means* – its overall form and features. This makes it hard for AI to learn and create good 3D models, especially when generating them step-by-step.
What's the solution?
The researchers developed a method called Level-of-Semantics Tokenization (LoST), which orders tokens by how important they are to the overall shape. The first tokens decode into a basic but recognizable version of the object, and later tokens add finer details. To train this system, they introduce a loss called Relational Inter-Distance Alignment (RIDA), which compares how shapes relate to each other in the tokenizer's latent space with how they relate in the feature space of a vision model (DINO) that captures semantic meaning, ensuring the tokens align with the object's core characteristics.
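The core idea of the alignment loss can be illustrated with a small sketch: compute pairwise distances between shapes in each of the two spaces, normalize them, and penalize mismatches between the two relational structures. This is a hypothetical simplification (the function names and the exact normalization are assumptions, not the paper's implementation), but it shows what "aligning relational structure" means in practice.

```python
import numpy as np

def pairwise_distances(x):
    """Euclidean distance matrix for a batch of embeddings x of shape (n, d)."""
    diff = x[:, None, :] - x[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def rida_loss(shape_latents, dino_feats):
    """Hypothetical sketch of a relational inter-distance alignment loss.

    Compares the normalized pairwise-distance structure of the 3D shape
    latent space with that of the semantic (DINO) feature space; the two
    spaces may have different dimensionality.
    """
    d_shape = pairwise_distances(shape_latents)
    d_sem = pairwise_distances(dino_feats)
    # Normalize each matrix by its mean so the loss ignores overall scale
    d_shape = d_shape / (d_shape.mean() + 1e-8)
    d_sem = d_sem / (d_sem.mean() + 1e-8)
    return ((d_shape - d_sem) ** 2).mean()
```

Because only relative distances are compared, the loss is zero when the two spaces agree on which shapes are similar to each other, regardless of the absolute scale of either embedding.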
Why it matters?
This work is important because it significantly improves the quality and efficiency of generating 3D shapes with AI. LoST creates better 3D models with fewer tokens than previous methods, meaning it requires less computing power. It also allows for tasks like searching for 3D models based on what they *are* (their meaning) rather than just their geometry, opening up possibilities for more advanced 3D applications.
Abstract
Tokenization is a fundamental technique in the generative modeling of various modalities. In particular, it plays a critical role in autoregressive (AR) models, which have recently emerged as a compelling option for 3D generation. However, optimal tokenization of 3D shapes remains an open question. State-of-the-art (SOTA) methods primarily rely on geometric level-of-detail (LoD) hierarchies, originally designed for rendering and compression. These spatial hierarchies are often token-inefficient and lack semantic coherence for AR modeling. We propose Level-of-Semantics Tokenization (LoST), which orders tokens by semantic salience, such that early prefixes decode into complete, plausible shapes that possess principal semantics, while subsequent tokens refine instance-specific geometric and semantic details. To train LoST, we introduce Relational Inter-Distance Alignment (RIDA), a novel 3D semantic alignment loss that aligns the relational structure of the 3D shape latent space with that of the semantic DINO feature space. Experiments show that LoST achieves SOTA reconstruction, surpassing previous LoD-based 3D shape tokenizers by large margins on both geometric and semantic reconstruction metrics. Moreover, LoST achieves efficient, high-quality AR 3D generation and enables downstream tasks like semantic retrieval, while using only 0.1%-10% of the tokens needed by prior AR models.