Unified Diffusion VLA: Vision-Language-Action Model via Joint Discrete Denoising Diffusion Process
Jiayi Chen, Wenxuan Song, Pengxiang Ding, Ziyang Zhou, Han Zhao, Feilong Tang, Donglin Wang, Haoang Li
2025-11-04
Summary
This paper introduces a new type of artificial intelligence model, called Unified Diffusion VLA, that can understand instructions given in natural language, 'see' what's happening through images, and then take actions in simulated and real environments. It's designed to coordinate these three things – understanding, imagining what comes next (generating future images), and acting – all at the same time rather than one after another.
What's the problem?
Existing models that try to do all three things – understand language, process images, and take actions – often fall short because they treat image generation and action planning as separate steps. Some rely on external 'expert' models to fuse the different modalities, which limits how directly the tasks can reinforce each other. The core issue is that these models don't fully leverage the connection between *what* an agent imagines happening and *what* actions it takes to make that happen.
What's the solution?
The researchers developed a model that uses a process called 'diffusion,' similar to how noise is gradually removed from a noisy image to reveal a clear picture. But instead of just images, this diffusion process works on future images *and* actions simultaneously, with everything represented as discrete tokens in a single shared space. This 'Joint Discrete Denoising Diffusion Process' (JD3P) lets the model refine its imagined future images and its planned actions together along a single denoising trajectory, so each can guide the other at every step. They also created a two-stage training pipeline and inference-time techniques to make the model work faster and more efficiently.
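The paper does not spell out the denoising rule itself, but discrete diffusion over tokens is commonly implemented as iterative mask-filling: start from a fully masked sequence and, at each step, commit the model's most confident predictions while leaving the rest masked for further refinement. The sketch below illustrates one such step over a *joint* sequence of image and action tokens; the `MASK` id, the array names, and the confidence-based selection rule are illustrative assumptions, not the authors' exact mechanism.

```python
import numpy as np

MASK = -1  # hypothetical id for the [MASK] token


def joint_denoise_step(tokens, pred_ids, pred_conf, n_reveal):
    """One confidence-based unmasking step over a joint token sequence.

    tokens:    current sequence; masked positions hold MASK
    pred_ids:  the model's predicted token id at every position
    pred_conf: the model's confidence at every position
    n_reveal:  how many masked positions to commit this step

    Image and action tokens live in the same sequence, so a single
    step refines both modalities together.
    """
    out = tokens.copy()
    masked = np.flatnonzero(tokens == MASK)
    if masked.size == 0:
        return out
    # Sort masked positions by model confidence, highest first,
    # and reveal only the top n_reveal of them.
    order = masked[np.argsort(-pred_conf[masked])]
    out[order[:n_reveal]] = pred_ids[order[:n_reveal]]
    return out


# Toy example: 4 image tokens + 2 action tokens, all masked initially.
tokens = np.full(6, MASK)
pred_ids = np.array([7, 3, 9, 1, 5, 2])
pred_conf = np.array([0.9, 0.2, 0.8, 0.1, 0.95, 0.6])
step1 = joint_denoise_step(tokens, pred_ids, pred_conf, n_reveal=3)
# The three most confident positions (indices 4, 0, 2) are revealed:
# step1 == [7, MASK, 9, MASK, 5, MASK]
```

In a full pipeline, the model would be re-run on the partially revealed sequence to produce fresh predictions before the next step, which is what lets actions evolve under constantly updated visual guidance.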
Why it matters?
This research is important because it represents a step forward in creating AI agents that can interact with the world more intelligently. By jointly optimizing image generation and action prediction, the model performs better on complex tasks and does so much faster than previous methods. This could have applications in robotics, virtual assistants, and other areas where AI needs to understand and respond to its environment.
Abstract
Vision-language-action (VLA) models aim to understand natural language instructions and visual observations and to execute corresponding actions as an embodied agent. Recent work integrates future images into the understanding-acting loop, yielding unified VLAs that jointly understand, generate, and act -- reading text and images and producing future images and actions. However, these models either rely on external experts for modality unification or treat image generation and action prediction as separate processes, limiting the benefits of direct synergy between these tasks. Our core philosophy is to optimize generation and action jointly through a synchronous denoising process, where the iterative refinement enables actions to evolve from initialization, under constant and sufficient visual guidance. We ground this philosophy in our proposed Unified Diffusion VLA and Joint Discrete Denoising Diffusion Process (JD3P), which is a joint diffusion process that integrates multiple modalities into a single denoising trajectory to serve as the key mechanism enabling understanding, generation, and acting to be intrinsically synergistic. Our model and theory are built on a unified tokenized space of all modalities and a hybrid attention mechanism. We further propose a two-stage training pipeline and several inference-time techniques that optimize performance and efficiency. Our approach achieves state-of-the-art performance on benchmarks such as CALVIN, LIBERO, and SimplerEnv with 4× faster inference than autoregressive methods, and we demonstrate its effectiveness through in-depth analysis and real-world evaluations. Our project page is available at https://irpn-eai.github.io/UD-VLA.github.io/.