Causal Diffusion Transformers for Generative Modeling
Chaorui Deng, Deyao Zhu, Kunchang Li, Shi Guan, Haoqi Fan
2024-12-17

Summary
This paper introduces Causal Diffusion Transformers, specifically a new model called CausalFusion, which combines the strengths of autoregressive (AR) models and diffusion models for generating data, particularly images. It does so by factorizing generation along two axes at once, sequential tokens and diffusion noise levels, which makes the model both more effective and more flexible than either paradigm on its own.
What's the problem?
Existing methods for generating data typically commit to a single factorization: autoregressive models predict one token after another, while diffusion models denoise an entire sample across noise levels. Each choice has drawbacks, and prior attempts to combine the two have not offered a unified framework that transitions smoothly between the two generation modes. In particular, pure diffusion models cannot easily produce an arbitrary number of tokens for in-context reasoning, which makes it hard to obtain high-quality outputs that need the benefits of both kinds of generation.
What's the solution?
The researchers developed CausalFusion, a decoder-only transformer that integrates the AR and diffusion processes into a single model. It uses a dual-factorization that splits generation across both sequential tokens and diffusion noise levels: tokens are produced in a causal order, and each group of tokens is refined through diffusion-style denoising conditioned on the tokens generated so far (see the sketch below). This design improves generation quality for images and other data types while also letting the model produce a variable number of tokens depending on the context. CausalFusion achieves state-of-the-art results on the ImageNet generation benchmark.
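The paper's code is not reproduced here, but the dual-factorization can be pictured as two nested loops: an outer autoregressive loop over blocks of tokens, and an inner diffusion loop over noise levels that denoises the current block while conditioning on the clean tokens generated so far. The sketch below only illustrates that idea under assumed interfaces; the names (`model`, `num_ar_steps`, `block_len`, `denoise_steps`) are hypothetical, and the velocity-prediction Euler update is a simplification rather than the paper's actual training or sampling recipe.

```python
# Minimal illustrative sketch of dual-factorized sampling (not the authors' code).
# Outer loop: autoregressive factorization over token blocks.
# Inner loop: diffusion factorization over noise levels for the current block,
# conditioned on the clean tokens generated so far.
import torch

def generate(model, num_ar_steps=4, block_len=64, dim=16, denoise_steps=10):
    clean_tokens = torch.empty(0, dim)              # nothing generated yet
    for _ in range(num_ar_steps):                   # sequential (token) axis
        block = torch.randn(block_len, dim)         # current block starts as pure noise
        for k in range(denoise_steps, 0, -1):       # noise-level axis
            t = k / denoise_steps
            # Hypothetical interface: given the noisy block, the clean causal
            # context, and the noise level, the model predicts a velocity
            # (a rectified-flow-style parameterization chosen for brevity).
            v = model(block, clean_tokens, t)
            block = block - (1.0 / denoise_steps) * v   # Euler step toward t = 0
        clean_tokens = torch.cat([clean_tokens, block], dim=0)
    return clean_tokens

# Dummy stand-in for a decoder-only transformer so the sketch runs end to end.
dummy_model = lambda noisy, context, t: torch.zeros_like(noisy)
print(generate(dummy_model).shape)  # torch.Size([256, 16])
```

In this picture, using a single autoregressive step with one large block behaves like plain diffusion over the whole image, while using many steps with one token per block approaches token-by-token autoregression; this is the sense in which the factorization lets the model move smoothly between the two generation modes.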
Why it matters?
CausalFusion is significant because it enhances the capabilities of generative models, allowing them to produce higher quality images and data more efficiently. This advancement could have broad applications in fields like computer vision, art generation, and any area where creating detailed visual content is important. By improving how models generate data, it opens up new possibilities for creativity and innovation in technology.
Abstract
We introduce Causal Diffusion as the autoregressive (AR) counterpart of Diffusion models. It is a next-token(s) forecasting framework that is friendly to both discrete and continuous modalities and compatible with existing next-token prediction models like LLaMA and GPT. While recent works attempt to combine diffusion with AR models, we show that introducing sequential factorization to a diffusion model can substantially improve its performance and enables a smooth transition between AR and diffusion generation modes. Hence, we propose CausalFusion - a decoder-only transformer that dual-factorizes data across sequential tokens and diffusion noise levels, leading to state-of-the-art results on the ImageNet generation benchmark while also enjoying the AR advantage of generating an arbitrary number of tokens for in-context reasoning. We further demonstrate CausalFusion's multimodal capabilities through a joint image generation and captioning model, and showcase CausalFusion's ability for zero-shot in-context image manipulations. We hope that this work could provide the community with a fresh perspective on training multimodal models over discrete and continuous data.
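Schematically, the dual factorization described in the abstract can be written as an autoregressive product over groups of tokens in which each conditional is itself a diffusion chain over noise levels; the notation below is illustrative rather than the paper's own:

$$
p_\theta(x) = \prod_{s=1}^{S} p_\theta\!\left(x_s \mid x_{<s}\right),
\qquad
p_\theta\!\left(x_s \mid x_{<s}\right)
= \int p\!\left(x_s^{(T)}\right)
\prod_{t=1}^{T} p_\theta\!\left(x_s^{(t-1)} \mid x_s^{(t)},\, x_{<s}\right)
\,\mathrm{d}x_s^{(1:T)},
$$

where $x_s$ denotes the $s$-th group of tokens, $x_{<s}$ the previously generated clean tokens, and $t$ indexes diffusion noise levels. Taking $S = 1$ recovers a standard diffusion model over the whole sample, while taking each group to be a single token moves the model toward ordinary next-token prediction.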