Cubic Discrete Diffusion: Discrete Visual Generation on High-Dimensional Representation Tokens
Yuqing Wang, Chuofan Ma, Zhijie Lin, Yao Teng, Lijun Yu, Shuai Wang, Jiaming Han, Jiashi Feng, Yi Jiang, Xihui Liu
2026-03-20
Summary
This paper introduces a new method called Cubic Discrete Diffusion, or CubiD, for creating images from a set of coded instructions, similar to how language models generate text. It focuses on improving the quality of these instructions, making them more detailed and capable of representing complex images.
What's the problem?
Currently, methods for generating images from these coded instructions, or 'tokens', use relatively low-dimensional codes that don't capture much semantic detail. More detailed, high-dimensional codes exist, but it's been difficult to use them for image generation because of the complexity of predicting every part of such a code. Essentially, existing methods sacrifice representational richness for the sake of tractable generation.
What's the solution?
The researchers developed CubiD, which can work with these high-dimensional codes. It does this by strategically hiding parts of the code and training the model to predict the missing pieces. This 'fill-in-the-blanks' approach allows the model to learn relationships within the code and across different parts of the image, regardless of how detailed the code is. The number of steps it takes to generate the image doesn't increase as the code gets more complex, which is a big improvement.
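The 'fill-in-the-blanks' idea can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: it assumes tokens of shape (h, w, d), a sentinel `MASK` id, and a linear reveal schedule over a fixed number of steps T (the paper may use a different schedule and model interface).

```python
import numpy as np

# Hypothetical sketch of fine-grained masking over a high-dimensional
# discrete token grid (illustrative only, not the paper's actual code).
# Tokens have shape (h, w, d): an h x w spatial grid where each position
# holds a d-dimensional discrete code. Any single (position, dimension)
# entry can be masked and predicted independently.

MASK = -1  # sentinel id for a masked entry (assumption)

def mask_tokens(tokens, mask_ratio, rng):
    """Mask a random fraction of individual (h, w, d) entries."""
    flat = tokens.reshape(-1).copy()
    n_mask = int(round(mask_ratio * flat.size))
    idx = rng.choice(flat.size, size=n_mask, replace=False)
    flat[idx] = MASK
    return flat.reshape(tokens.shape), idx

def generate(predict_fn, shape, T, rng):
    """Iteratively fill in masked entries over T fixed steps.

    T does not grow with the code dimensionality d: each step reveals
    a batch of entries anywhere in the (h, w, d) grid.
    """
    tokens = np.full(shape, MASK, dtype=np.int64)
    for t in range(T):
        still_masked = np.flatnonzero(tokens.reshape(-1) == MASK)
        # Linear schedule: reveal an equal share of remaining entries.
        n_reveal = max(1, len(still_masked) // (T - t))
        chosen = rng.choice(still_masked, size=n_reveal, replace=False)
        preds = predict_fn(tokens)  # model predicts every entry
        tokens.reshape(-1)[chosen] = preds.reshape(-1)[chosen]
    return tokens
```

Because masking is per-entry rather than per-position, the model sees partial observations both within a single position's d-dimensional code and across spatial positions, which is the key difference from standard low-dimensional masked token generation.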
Why it matters?
This work is important because it allows for a more unified approach to building AI systems that can understand and generate both text and images. By using a similar coding system for both, it becomes easier to create AI that can seamlessly switch between tasks like describing an image or creating one from a text prompt. It paves the way for more powerful and versatile multimodal AI.
Abstract
Visual generation with discrete tokens has gained significant attention as it enables a unified token prediction paradigm shared with language models, promising seamless multimodal architectures. However, current discrete generation methods remain limited to low-dimensional latent tokens (typically 8-32 dims), sacrificing the semantic richness essential for understanding. While high-dimensional pretrained representations (768-1024 dims) could bridge this gap, their discrete generation poses fundamental challenges. In this paper, we present Cubic Discrete Diffusion (CubiD), the first discrete generation model for high-dimensional representations. CubiD performs fine-grained masking throughout the high-dimensional discrete representation -- any dimension at any position can be masked and predicted from partial observations. This enables the model to learn rich correlations both within and across spatial positions, with the number of generation steps fixed at T regardless of feature dimensionality, where T ≪ hwd. On ImageNet-256, CubiD achieves state-of-the-art discrete generation with strong scaling behavior from 900M to 3.7B parameters. Crucially, we validate that these discretized tokens preserve original representation capabilities, demonstrating that the same discrete tokens can effectively serve both understanding and generation tasks. We hope this work will inspire future research toward unified multimodal architectures. Code is available at: https://github.com/YuqingWang1029/CubiD.