DART: Denoising Autoregressive Transformer for Scalable Text-to-Image Generation

Jiatao Gu, Yuyang Wang, Yizhe Zhang, Qihang Zhang, Dinghuai Zhang, Navdeep Jaitly, Josh Susskind, Shuangfei Zhai

2024-10-13

Summary

This paper presents DART, a new model that combines two powerful techniques—autoregressive modeling and diffusion modeling—to generate high-quality images from text descriptions more efficiently.

What's the problem?

Generating images from text is a complex task that requires models to understand the text and create corresponding visuals. Traditional methods often rely on diffusion models, which gradually refine images from noise, but these methods can be inefficient. They also have limitations due to their Markovian nature, which means they don't fully utilize the information available during the generation process.

What's the solution?

To improve this process, the authors developed DART, a transformer-based model that integrates autoregressive and diffusion techniques in a non-Markovian framework. This means DART can condition on the entire generation trajectory rather than only the most recent step. It iteratively denoises image patches both spatially and spectrally, achieving high image quality without needing to quantize images into discrete tokens. Because it uses the same architecture as standard language models, DART can also train on text and image data together in a single model, making it versatile across tasks. The results showed that DART performs competitively on benchmarks for class-conditioned and text-to-image generation.
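The key idea, conditioning each denoising step on the whole trajectory instead of only the previous step, can be illustrated with a toy sketch. This is not the paper's actual architecture: the `toy_denoiser` function below is a hypothetical stand-in for DART's transformer, and the averaging-based "refinement" is purely illustrative of non-Markovian conditioning.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_denoiser(trajectory, target):
    """Hypothetical stand-in for DART's transformer: it predicts a
    cleaner image from the WHOLE trajectory of noisy images so far
    (non-Markovian), here just by averaging the trajectory and
    nudging the result toward the target for illustration."""
    context = np.mean(trajectory, axis=0)  # attends to all prior steps
    return 0.5 * context + 0.5 * target    # toy refinement step

clean = np.ones((4, 4))            # pretend "clean" image
x = rng.normal(size=(4, 4))        # start from pure noise
trajectory = [x]
for _ in range(8):
    x = toy_denoiser(np.stack(trajectory), clean)
    trajectory.append(x)

initial_err = float(np.abs(trajectory[0] - clean).mean())
final_err = float(np.abs(trajectory[-1] - clean).mean())
print(final_err < initial_err)  # the iterates approach the clean image
```

A Markovian diffusion model would pass only `trajectory[-1]` into the denoiser; the sketch's access to the full `trajectory` is what the paper argues lets the model use the generation history more efficiently.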

Why it matters?

This research is significant because it sets a new standard for generating images from text, making the process faster and more efficient while producing high-quality results. DART's ability to scale up for larger images and its integration of multiple data types could lead to advancements in fields like graphic design, game development, and any area where visual content creation is important.

Abstract

Diffusion models have become the dominant approach for visual generation. They are trained by denoising a Markovian process that gradually adds noise to the input. We argue that the Markovian property limits the model's ability to fully utilize the generation trajectory, leading to inefficiencies during training and inference. In this paper, we propose DART, a transformer-based model that unifies autoregressive (AR) and diffusion within a non-Markovian framework. DART iteratively denoises image patches spatially and spectrally using an AR model with the same architecture as standard language models. DART does not rely on image quantization, enabling more effective image modeling while maintaining flexibility. Furthermore, DART seamlessly trains with both text and image data in a unified model. Our approach demonstrates competitive performance on class-conditioned and text-to-image generation tasks, offering a scalable, efficient alternative to traditional diffusion models. Through this unified framework, DART sets a new benchmark for scalable, high-quality image synthesis.