
LightGen: Efficient Image Generation through Knowledge Distillation and Direct Preference Optimization

Xianfeng Wu, Yajing Bai, Haoze Zheng, Harold Haodong Chen, Yexin Liu, Zihao Wang, Xuran Ma, Wen-Jie Shu, Xianzu Wu, Harry Yang, Ser-Nam Lim

2025-03-12

Summary

This paper introduces LightGen, a faster and cheaper way to train AI image generators: it copies knowledge from top models and then fixes errors using a preference-based feedback system.

What's the problem?

Making good AI image tools usually needs huge datasets and powerful computers, which are too expensive or hard to get for most people.

What's the solution?

LightGen uses a smaller model that learns from top image generators (like copying answers from a smart friend) and improves image details through a feedback system that picks better-looking pictures.

Why does it matter?

This lets more people create custom images for art, design, or education without needing expensive tech, making AI tools fairer and easier to use.

Abstract

Recent advances in text-to-image generation have primarily relied on extensive datasets and parameter-heavy architectures. These requirements severely limit accessibility for researchers and practitioners who lack substantial computational resources. In this paper, we introduce LightGen, an efficient training paradigm for image generation models that uses knowledge distillation (KD) and Direct Preference Optimization (DPO). Drawing inspiration from the success of data KD techniques widely adopted in Multi-Modal Large Language Models (MLLMs), LightGen distills knowledge from state-of-the-art (SOTA) text-to-image models into a compact Masked Autoregressive (MAR) architecture with only 0.7B parameters. Using a compact synthetic dataset of just 2M high-quality images generated from varied captions, we demonstrate that data diversity significantly outweighs data volume in determining model performance. This strategy dramatically reduces computational demands, cutting pre-training time from potentially thousands of GPU-days to merely 88 GPU-days. Furthermore, to address the inherent shortcomings of synthetic data, particularly poor high-frequency details and spatial inaccuracies, we integrate the DPO technique, which refines image fidelity and positional accuracy. Comprehensive experiments confirm that LightGen achieves image generation quality comparable to SOTA models while significantly reducing computational resources and expanding accessibility for resource-constrained environments. Code is available at https://github.com/XianfengWu01/LightGen.
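The abstract does not give the DPO objective itself, but the standard DPO loss the paper builds on can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes per-pair log-likelihoods of a preferred and a rejected image under the trained model and a frozen reference model, and the variable names (`logp_w`, `logp_l`, etc.) are hypothetical.

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Standard DPO loss for one preference pair (sketch, not LightGen's code).

    logp_w / logp_l:         model log-likelihoods of the preferred (w)
                             and rejected (l) image.
    ref_logp_w / ref_logp_l: the same likelihoods under a frozen
                             reference model.
    beta:                    strength of the implicit KL constraint.
    """
    # Margin by which the model favors the preferred sample more than
    # the reference model does.
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    # -log sigmoid(margin): the loss shrinks as that margin grows.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

With zero margin the loss is log 2; ranking the preferred image above the rejected one (relative to the reference) drives it below that, which is how preference feedback can sharpen details the synthetic pre-training data gets wrong.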