Equivariant Image Modeling

Ruixiao Dong, Mengde Xu, Zigang Geng, Li Li, Han Hu, Shuyang Gu

2025-03-25

Summary

This paper is about making AI better at generating images by taking advantage of the fact that a visual pattern looks the same no matter where it appears in an image (a property called translation invariance).

What's the problem?

Current AI models for generating images break the job into many smaller subtasks, such as predicting each part of the image in turn. These subtasks can pull the model in conflicting directions during training, which makes learning less efficient and harder to scale.

What's the solution?

The researchers developed a new way to train AI models that ensures the model treats a pattern the same way no matter where it sits in the image. They do this by splitting images into vertical column tokens and limiting each prediction to a fixed-size window of preceding context, so every position sees the same kind of context. This alignment helps the model learn more efficiently and generate better images.

Why it matters?

This work matters because it can lead to AI models that generate higher-quality images with less computing power and that better capture the underlying structure of images.

Abstract

Current generative models, such as autoregressive and diffusion approaches, decompose high-dimensional data distribution learning into a series of simpler subtasks. However, inherent conflicts arise during the joint optimization of these subtasks, and existing solutions fail to resolve such conflicts without sacrificing efficiency or scalability. We propose a novel equivariant image modeling framework that inherently aligns optimization targets across subtasks by leveraging the translation invariance of natural visual signals. Our method introduces (1) column-wise tokenization which enhances translational symmetry along the horizontal axis, and (2) windowed causal attention which enforces consistent contextual relationships across positions. Evaluated on class-conditioned ImageNet generation at 256x256 resolution, our approach achieves performance comparable to state-of-the-art AR models while using fewer computational resources. Systematic analysis demonstrates that enhanced equivariance reduces inter-task conflicts, significantly improving zero-shot generalization and enabling ultra-long image synthesis. This work establishes the first framework for task-aligned decomposition in generative modeling, offering insights into efficient parameter sharing and conflict-free optimization. The code and models are publicly available at https://github.com/drx-code/EquivariantModeling.
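The windowed causal attention described in the abstract can be illustrated with a small sketch. The idea is that each column token attends only to a fixed-size window of preceding tokens, so every position (past the first few) sees an identical relative context. This is a minimal illustration of the general mechanism, not the authors' implementation; the function name and window size are hypothetical.

```python
import numpy as np

def windowed_causal_mask(num_tokens: int, window: int) -> np.ndarray:
    """Boolean attention mask: token i may attend to token j
    iff j <= i (causal) and i - j < window (fixed window).

    With a fixed window, every position beyond the first `window`
    tokens sees the same number of context tokens at the same
    relative offsets -- the consistency that windowed causal
    attention enforces. Hypothetical sketch, not the paper's code.
    """
    idx = np.arange(num_tokens)
    causal = idx[None, :] <= idx[:, None]          # no attending to the future
    within = idx[:, None] - idx[None, :] < window  # only the last `window` tokens
    return causal & within

# Example: 6 column tokens, window of 3.
mask = windowed_causal_mask(6, 3)
```

In a full model, this mask would be applied inside the attention layers so that logits for out-of-window positions are set to negative infinity before the softmax.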