dInfer: An Efficient Inference Framework for Diffusion Language Models

Yuxin Ma, Lun Du, Lanning Wei, Kun Chen, Qian Xu, Kangyu Wang, Guofeng Feng, Guoshan Lu, Lin Liu, Xiaojing Qi, Xinyuan Zhang, Zhen Tao, Haibo Feng, Ziyun Jiang, Ying Xu, Zenan Huang, Yihong Zhuang, Haokai Xu, Jiaqi Hu, Zhenzhong Lan, Junbo Zhao, Jianguo Li

2025-10-15

Summary

This paper introduces dInfer, a new system designed to make diffusion-based large language models (dLLMs) run much faster and more efficiently.

What's the problem?

While dLLMs are a promising new type of AI model that can generate text, they haven't become widely used because there wasn't a good, standardized way to actually *use* them quickly and efficiently. Existing methods were slow, making it hard to take advantage of their potential.

What's the solution?

The researchers created dInfer, which breaks the process of running a dLLM into four key parts: the model itself, a manager for the diffusion iterations, a strategy for decoding the output, and a manager for the KV cache (the memory of past computations). They then improved each of these parts with new algorithms and system-level optimizations, resulting in a significantly faster system.
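To make the four-part decomposition concrete, here is a toy sketch of such a modular pipeline. All class names, method signatures, and the confidence-based commit rule are illustrative stand-ins, not the actual dInfer API; a real dLLM would predict tokens with a neural network rather than the placeholder logic below.

```python
import random

MASK = None  # placeholder for a masked (not-yet-generated) token

class ToyModel:
    """Stands in for the dLLM: 'predicts' a token and confidence per mask."""
    def forward(self, tokens, kv_cache):
        rng = random.Random(len(kv_cache) + tokens.count(MASK))
        # (position, predicted_token, confidence) for each masked slot
        return [(i, f"tok{i}", rng.random())
                for i, t in enumerate(tokens) if t is MASK]

class IterationManager:
    """Decides how the denoising iterations proceed and when they stop."""
    def __init__(self, tokens_per_step=2):
        self.tokens_per_step = tokens_per_step
    def done(self, tokens):
        return MASK not in tokens

class DecodingStrategy:
    """Parallel decoding: commit the highest-confidence predictions."""
    def select(self, predictions, k):
        return sorted(predictions, key=lambda p: -p[2])[:k]

class KVCacheManager:
    """Manages state reused across iterations; here, committed positions."""
    def __init__(self):
        self.cache = []
    def update(self, committed):
        self.cache.extend(pos for pos, _, _ in committed)

def generate(seq_len=6):
    """Fill a fully-masked sequence, several tokens per iteration."""
    tokens = [MASK] * seq_len
    model, it_mgr = ToyModel(), IterationManager()
    decoder, kv = DecodingStrategy(), KVCacheManager()
    steps = 0
    while not it_mgr.done(tokens):
        preds = model.forward(tokens, kv.cache)
        committed = decoder.select(preds, it_mgr.tokens_per_step)
        for pos, tok, _ in committed:
            tokens[pos] = tok
        kv.update(committed)
        steps += 1
    return tokens, steps

tokens, steps = generate()
print(steps, tokens)  # 6 masked positions, 2 committed per step -> 3 iterations
```

Because each component sits behind its own interface, a faster decoding strategy or a smarter cache policy can be swapped in without touching the rest of the pipeline, which is the kind of extensibility the paper claims for dInfer.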

Why it matters?

dInfer dramatically speeds up dLLMs, making them up to ten times faster than previous systems and even faster than highly optimized traditional language models. This improvement makes dLLMs more practical for real-world applications and encourages further development of this promising technology. The code is publicly available for others to build upon.

Abstract

Diffusion-based large language models (dLLMs) have emerged as a promising alternative to autoregressive (AR) LLMs, leveraging denoising-based generation to enable inherent parallelism. More and more open-sourced dLLM models have emerged, yet their widespread adoption remains constrained by the lack of a standardized and efficient inference framework. We present dInfer, an efficient and extensible framework for dLLM inference. dInfer decomposes the inference pipeline into four modular components--model, diffusion iteration manager, decoding strategy, and KV-cache manager--and integrates novel algorithms for each component alongside system-level optimizations. Through this combination of algorithmic innovations and system enhancements, dInfer achieves substantial efficiency gains without compromising output quality on LLaDA-MoE. At batch size 1, it surpasses 1,100 tokens per second on HumanEval and averages over 800 tokens per second across six benchmarks on 8x H800 GPUs. Compared to prior systems, dInfer delivers a 10x speedup over Fast-dLLM while maintaining similar model performance. Even compared to the AR model Qwen2.5-3B (with a comparable number of activation parameters and comparable performance), which is highly optimized with the latest vLLM inference engine, dInfer still delivers a 2-3x speedup. The implementation of dInfer is open-sourced at https://github.com/inclusionAI/dInfer.