
Discrete Diffusion VLA: Bringing Discrete Diffusion to Action Decoding in Vision-Language-Action Policies

Zhixuan Liang, Yizhuo Li, Tianshuo Yang, Chengyue Wu, Sitong Mao, Liuao Pei, Xiaokang Yang, Jiangmiao Pang, Yao Mu, Ping Luo

2025-08-28


Summary

This paper introduces a new way to get robots to follow instructions based on what they see, using a system called Discrete Diffusion VLA.

What's the problem?

Getting robots to act on instructions and images is tricky because current methods for translating that information into movements have drawbacks: they either generate actions one at a time in a fixed left-to-right order, which is rigid and slow, or they bolt on separate continuous diffusion or flow-matching components that require specialized training and many sampling steps. Neither approach fits cleanly with the vision-language models that already understand images and text, making it hard to build a single, efficient system.

What's the solution?

The researchers developed a new 'decoder' for these vision-language-action models. Instead of generating action tokens one by one or running a separate continuous process, they discretize actions into tokens and refine whole chunks of them at once using a technique called 'discrete diffusion'. This is like gradually refining a blurry image into a clear one, but applied to the sequence of actions a robot needs to take: the model starts from fully masked tokens, fills in the easy ones first, and revisits uncertain predictions in later rounds. Importantly, this decoder is trained with the same cross-entropy objective as the part of the system that understands images and language, so everything works together seamlessly, and decoding tokens in parallel makes action generation faster.
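To make the training side concrete, here is a minimal sketch of the two ingredients the paragraph above describes: binning continuous action values into discrete tokens, and randomly masking some of those tokens so a model can be trained with plain cross-entropy to recover them. The bin count, value range, and helper names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def discretize_actions(actions, num_bins=256, low=-1.0, high=1.0):
    """Bin continuous action values into integer token ids.
    A common VLA tokenization scheme; num_bins/range are assumptions."""
    clipped = np.clip(actions, low, high)
    return ((clipped - low) / (high - low) * (num_bins - 1)).round().astype(int)

def mask_for_training(tokens, mask_id, mask_frac, rng):
    """Randomly mask a fraction of action tokens.
    The model would then be trained with cross-entropy to recover the
    masked ids -- the same objective as the VLM backbone's token loss."""
    corrupted = tokens.copy()
    idx = rng.choice(tokens.size, size=max(1, int(mask_frac * tokens.size)),
                     replace=False)
    corrupted.flat[idx] = mask_id
    targets = np.full(tokens.size, -100)  # -100: ignored positions (PyTorch convention)
    targets[idx] = tokens.flat[idx]
    return corrupted, targets
```

Because the corrupted sequence and targets look exactly like masked-language-model training data, the action decoder can share the backbone's training loop rather than needing a separate regression or diffusion loss.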

Why it matters?

This work is important because it creates a more efficient and effective way to control robots with instructions and visual input. By simplifying the process and making it compatible with existing technology, it paves the way for building more sophisticated robots that can handle complex tasks and learn from larger amounts of data. The results show clear gains over both autoregressive and continuous-diffusion baselines (for example, a 96.3% average success rate on the LIBERO benchmark), suggesting this approach is a promising step towards more capable and adaptable robots.

Abstract

Vision-Language-Action (VLA) models adapt large vision-language backbones to map images and instructions to robot actions. However, prevailing VLA decoders either generate actions autoregressively in a fixed left-to-right order or attach continuous diffusion or flow matching heads outside the backbone, demanding specialized training and iterative sampling that hinder a unified, scalable architecture. We present Discrete Diffusion VLA, a single-transformer policy that models discretized action chunks with discrete diffusion and is trained with the same cross-entropy objective as the VLM backbone. The design retains diffusion's progressive refinement paradigm while remaining natively compatible with the discrete token interface of VLMs. Our method achieves an adaptive decoding order that resolves easy action elements before harder ones and uses secondary remasking to revisit uncertain predictions across refinement rounds, which improves consistency and enables robust error correction. This unified decoder preserves pretrained vision-language priors, supports parallel decoding, breaks the autoregressive bottleneck, and reduces the number of function evaluations. Discrete Diffusion VLA achieves 96.3% avg. SR on LIBERO, 71.2% visual matching on SimplerEnv Fractal and 49.3% overall on SimplerEnv Bridge, improving over both autoregressive and continuous diffusion baselines. These findings indicate that the discrete-diffusion action decoder supports precise action modeling and consistent training, laying groundwork for scaling VLA to larger models and datasets.
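The decoding loop the abstract describes (adaptive easy-first unmasking plus secondary remasking) can be sketched as follows. The denoiser here is a random stand-in for the real VLM backbone, and the round count, unmasking fraction, and remasking threshold are all illustrative assumptions, not values from the paper.

```python
import numpy as np

MASK = -1  # hypothetical mask token id

def toy_denoiser(tokens, rng, vocab=8):
    """Stand-in for the VLM backbone: random per-position probabilities
    over a small action vocabulary. Purely illustrative."""
    logits = rng.standard_normal((len(tokens), vocab))
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def discrete_diffusion_decode(seq_len, rounds=4, unmask_frac=0.25,
                              remask_thresh=0.2, seed=0):
    """Toy sketch of confidence-ordered unmasking with secondary
    remasking, as described in the abstract."""
    rng = np.random.default_rng(seed)
    tokens = np.full(seq_len, MASK, dtype=int)
    conf = np.zeros(seq_len)
    for _ in range(rounds):
        probs = toy_denoiser(tokens, rng)
        pred, pred_conf = probs.argmax(axis=1), probs.max(axis=1)
        masked = np.flatnonzero(tokens == MASK)
        if masked.size:
            # commit the most confident masked positions first (easy before hard)
            k = max(1, int(np.ceil(unmask_frac * seq_len)))
            pick = masked[np.argsort(-pred_conf[masked])][:k]
            tokens[pick], conf[pick] = pred[pick], pred_conf[pick]
        # secondary remasking: reopen low-confidence commitments for revision
        committed = np.flatnonzero(tokens != MASK)
        tokens[committed[conf[committed] < remask_thresh]] = MASK
    # final pass fills any positions still masked
    probs = toy_denoiser(tokens, rng)
    still = tokens == MASK
    tokens[still] = probs.argmax(axis=1)[still]
    return tokens
```

Because every round predicts all masked positions at once, the loop runs a fixed small number of backbone calls regardless of sequence length, which is the sense in which this decoding style breaks the autoregressive bottleneck.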