ControlAR: Controllable Image Generation with Autoregressive Models

Zongming Li, Tianheng Cheng, Shoufa Chen, Peize Sun, Haocheng Shen, Longjin Ran, Xiaoxin Chen, Wenyu Liu, Xinggang Wang

2024-10-09

Summary

This paper introduces ControlAR, a new method for controllable image generation with autoregressive models: users guide the generated image by supplying spatial control information such as edges or depth maps.

What's the problem?

Previous autoregressive models for image generation struggled to incorporate user-defined spatial controls effectively, which limited their ability to produce images that match specific requirements. This lack of control also left their outputs behind diffusion-based methods such as ControlNet in quality.

What's the solution?

The authors developed ControlAR, which uses a lightweight control encoder to transform spatial inputs (like canny edges or depth maps) into control tokens. During conditional decoding, each control token is fused with the corresponding image token, and the model predicts the next image token from this fused representation. This per-token fusion integrates user controls more tightly than simply prefilling control tokens, and it also lets the model generate images at arbitrary resolutions without compromising quality.
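
The conditional-decoding idea can be illustrated with a minimal sketch. Everything here is a stand-in (the random control tokens, the `decoder_step` function, additive fusion as the concrete fusion rule); it only shows the interface the paper describes: at each step, the control token for the next position is fused with the current image token, analogous to adding a positional encoding, before the model predicts the next token.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 8

# Hypothetical control tokens, one per image-token position (in ControlAR
# these would come from the lightweight control encoder applied to, e.g.,
# a canny-edge map).
control_tokens = rng.normal(size=(4, d_model))

def decoder_step(x):
    # Stand-in for the AR transformer's next-token prediction; the real
    # model attends over all previously generated image tokens.
    return np.tanh(x)

def conditional_decode(control_tokens):
    tokens = [np.zeros(d_model)]  # start-of-sequence embedding
    for t in range(len(control_tokens)):
        # Per-token fusion: the control token for position t is combined
        # with the current image token (additive fusion assumed here),
        # analogous to adding a positional encoding.
        fused = tokens[-1] + control_tokens[t]
        tokens.append(decoder_step(fused))
    return np.stack(tokens[1:])

image_tokens = conditional_decode(control_tokens)  # shape (4, d_model)
```

The key contrast with the prefill approach is that the control signal is injected at every decoding step rather than only appearing once at the start of the sequence, which is what the paper credits for the stronger controllability at no extra sequence length.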

Why it matters?

This research is significant because it enhances the capabilities of autoregressive models in generating high-quality images while allowing for precise control over various aspects of the output. By improving how these models can be guided, ControlAR opens up new possibilities for applications in art, design, and any field where customized image generation is needed.

Abstract

Autoregressive (AR) models have reformulated image generation as next-token prediction, demonstrating remarkable potential and emerging as strong competitors to diffusion models. However, control-to-image generation, akin to ControlNet, remains largely unexplored within AR models. Although a natural approach, inspired by advancements in Large Language Models, is to tokenize control images and prefill the resulting tokens into the autoregressive model before decoding image tokens, it still falls short in generation quality compared to ControlNet and suffers from inefficiency. To this end, we introduce ControlAR, an efficient and effective framework for integrating spatial controls into autoregressive image generation models. Firstly, we explore control encoding for AR models and propose a lightweight control encoder to transform spatial inputs (e.g., canny edges or depth maps) into control tokens. Then ControlAR exploits the conditional decoding method to generate the next image token conditioned on the per-token fusion between control and image tokens, similar to positional encodings. Compared to prefilling tokens, conditional decoding not only significantly strengthens the control capability of AR models but also maintains the model's efficiency. Furthermore, the proposed ControlAR surprisingly empowers AR models with arbitrary-resolution image generation via conditional decoding and specific controls. Extensive experiments demonstrate the controllability of the proposed ControlAR for autoregressive control-to-image generation across diverse inputs, including edges, depths, and segmentation masks. Furthermore, both quantitative and qualitative results indicate that ControlAR surpasses previous state-of-the-art controllable diffusion models, e.g., ControlNet++. Code, models, and demo will soon be available at https://github.com/hustvl/ControlAR.
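
The control encoder's job, as described in the abstract, is to turn a spatial control map into a sequence of control tokens, one per image-token position. A minimal sketch of that interface, with an untrained random projection standing in for the paper's learned lightweight encoder (the patch size, dimensions, and projection are all illustrative assumptions):

```python
import numpy as np

def encode_control(control_map, patch=4, d_model=8, seed=0):
    """Hypothetical lightweight control encoder: patchify a spatial control
    input (e.g., a canny-edge or depth map) and project each patch to a
    control token. ControlAR's actual encoder is a small learned network;
    this sketch only illustrates the map -> token-sequence interface."""
    H, W = control_map.shape
    rng = np.random.default_rng(seed)
    proj = rng.normal(size=(patch * patch, d_model))  # stand-in for learned weights
    tokens = []
    for i in range(0, H, patch):
        for j in range(0, W, patch):
            flat = control_map[i:i + patch, j:j + patch].reshape(-1)
            tokens.append(flat @ proj)
    # One control token per image-token position, ready for per-token
    # fusion during conditional decoding.
    return np.stack(tokens)

# Toy binary "edge map": an 8x8 grid patchified into a 2x2 grid of tokens.
edge_map = (np.arange(64).reshape(8, 8) % 5 == 0).astype(float)
ctrl_tokens = encode_control(edge_map)  # shape (4, 8)
```

Because the number of control tokens simply tracks the spatial size of the input map, this tokenized representation is also what makes the arbitrary-resolution generation described above possible.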