Improving Long-Text Alignment for Text-to-Image Diffusion Models
Luping Liu, Chao Du, Tianyu Pang, Zehan Wang, Chongxuan Li, Dong Xu
2024-10-17

Summary
This paper introduces LongAlign, a method that improves how text-to-image (T2I) diffusion models generate images from long text descriptions by strengthening the alignment between the generated images and the input text.
What's the problem?
As text inputs become longer and more complex, existing text encoders such as CLIP, which accepts at most 77 tokens, struggle to effectively match the generated images with the detailed descriptions. This makes it difficult for T2I models to create accurate images that reflect the full meaning of longer texts.
What's the solution?
To solve this problem, the authors developed LongAlign, which uses two main strategies. First, it breaks long texts into smaller segments so that each part can be encoded separately, sidestepping the encoder's input-length limit (see the sketch below). Second, it improves preference-based fine-tuning by decomposing preference scores into two parts: one that measures how well the image matches the text and another that captures text-irrelevant visual qualities. By reweighting these two parts during training, the method reduces overfitting and improves alignment between text and images.
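To make the segment-level encoding idea concrete, here is a minimal sketch in Python, assuming Hugging Face's `transformers` CLIP text encoder. The function name `encode_long_text` and the simple concatenation of per-segment embeddings are illustrative assumptions; the paper's actual strategy for merging segment outputs may differ.

```python
# Minimal sketch: encode a long prompt by splitting it into segments
# that fit CLIP's 77-token limit, encoding each segment separately,
# and concatenating the token embeddings along the sequence axis.
import torch
from transformers import CLIPTokenizer, CLIPTextModel

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

def encode_long_text(text: str, max_len: int = 77) -> torch.Tensor:
    ids = tokenizer(text, truncation=False).input_ids
    # Strip the BOS/EOS the tokenizer adds; re-add them per segment.
    bos, eos = ids[0], ids[-1]
    body = ids[1:-1]
    chunk = max_len - 2  # leave room for BOS/EOS in each segment
    segments = [
        [bos] + body[i:i + chunk] + [eos]
        for i in range(0, len(body), chunk)
    ]
    embeddings = []
    with torch.no_grad():
        for seg in segments:
            seg_ids = torch.tensor([seg])  # shape (1, L)
            out = text_encoder(seg_ids).last_hidden_state  # (1, L, D)
            embeddings.append(out)
    # Concatenated embeddings can then condition the diffusion model.
    return torch.cat(embeddings, dim=1)
```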
Why it matters?
This research is important because it enhances the capabilities of text-to-image models, allowing them to generate more accurate images from detailed and lengthy descriptions. This advancement can lead to better applications in areas like digital art, advertising, and content creation, where precise visual representation of complex ideas is crucial.
Abstract
The rapid advancement of text-to-image (T2I) diffusion models has enabled them to generate unprecedented results from given texts. However, as text inputs become longer, existing encoding methods like CLIP face limitations, and aligning the generated images with long texts becomes challenging. To tackle these issues, we propose LongAlign, which includes a segment-level encoding method for processing long texts and a decomposed preference optimization method for effective alignment training. For segment-level encoding, long texts are divided into multiple segments and processed separately. This method overcomes the maximum input length limits of pretrained encoding models. For preference optimization, we provide decomposed CLIP-based preference models to fine-tune diffusion models. Specifically, to utilize CLIP-based preference models for T2I alignment, we delve into their scoring mechanisms and find that the preference scores can be decomposed into two components: a text-relevant part that measures T2I alignment and a text-irrelevant part that assesses other visual aspects of human preference. Additionally, we find that the text-irrelevant part contributes to a common overfitting problem during fine-tuning. To address this, we propose a reweighting strategy that assigns different weights to these two components, thereby reducing overfitting and enhancing alignment. After fine-tuning 512×512 Stable Diffusion (SD) v1.5 for about 20 hours using our method, the fine-tuned SD outperforms stronger foundation models in T2I alignment, such as PixArt-α and Kandinsky v2.2. The code is available at https://github.com/luping-liu/LongAlign.
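For intuition on the decomposed preference score, below is a toy sketch, assuming the preference model scores image/text pairs with a CLIP-style inner product of normalized embeddings. The `common_dir` vector (e.g., a mean text embedding over a prompt set) and the weights `w_rel`/`w_irr` are hypothetical stand-ins; the paper's exact decomposition and reweighting may differ.

```python
# Toy sketch: split a CLIP-style preference score into a text-relevant
# part (alignment) and a text-irrelevant part (general visual quality),
# then reweight them. Downweighting the text-irrelevant part follows the
# paper's observation that this component drives reward overfitting.
import torch
import torch.nn.functional as F

def decomposed_score(img_emb, txt_emb, common_dir, w_rel=1.0, w_irr=0.1):
    img_emb = F.normalize(img_emb, dim=-1)       # (B, D)
    txt_emb = F.normalize(txt_emb, dim=-1)       # (B, D)
    common_dir = F.normalize(common_dir, dim=-1) # (D,)
    # Text-irrelevant component: projection of the text embedding onto
    # the direction shared by all prompts.
    coef = (txt_emb * common_dir).sum(-1, keepdim=True)  # (B, 1)
    irr = coef * common_dir                              # (B, D)
    rel = txt_emb - irr  # text-relevant remainder
    score_rel = (img_emb * rel).sum(-1)  # measures T2I alignment
    score_irr = (img_emb * irr).sum(-1)  # measures other visual aspects
    return w_rel * score_rel + w_irr * score_irr
```

During reward fine-tuning, a score like this would replace the raw CLIP similarity, so gradients emphasize the alignment component rather than the shared, prompt-independent direction.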