LDGen: Enhancing Text-to-Image Synthesis via Large Language Model-Driven Language Representation

Pengzhi Li, Pengfei Yu, Zide Liu, Wei He, Xuhao Pan, Xudong Rao, Tao Wei, Wei Chen

2025-02-26

Summary

This paper talks about LDGen, a new way to make AI create images from text descriptions that works better across different languages and produces higher-quality images.

What's the problem?

Current AI systems that turn text into images struggle with languages other than English and sometimes don't create images that match the text descriptions very well. They also take a long time to train and use a lot of computing power.

What's the solution?

The researchers created LDGen, which uses powerful language AI (called large language models, or LLMs) to understand text better. They also built two small connector components, a lightweight adapter and a cross-modal refiner, to help the language AI work well with the image-creation part. This new system can create images from text in many languages without needing extra training for each one, and it makes images that match the descriptions more closely.
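To make the "lightweight adapter" idea concrete, here is a minimal PyTorch-style sketch: a small trainable module that maps features from a frozen LLM into the embedding space a diffusion model's text conditioning expects. All module names and dimensions below are illustrative assumptions, not the paper's actual code.

```python
import torch
import torch.nn as nn

class LLMAdapter(nn.Module):
    """Illustrative lightweight adapter: projects LLM token features
    into the space the diffusion model's cross-attention expects.
    Dimensions are assumptions, not taken from the paper."""
    def __init__(self, llm_dim: int = 4096, diff_dim: int = 2048):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(llm_dim, diff_dim),
            nn.GELU(),
            nn.Linear(diff_dim, diff_dim),
        )
        self.norm = nn.LayerNorm(diff_dim)

    def forward(self, llm_features: torch.Tensor) -> torch.Tensor:
        # llm_features: (batch, seq_len, llm_dim) hidden states from a
        # frozen multilingual LLM that encoded the prompt.
        return self.norm(self.proj(llm_features))

# Usage sketch: only the adapter is trained; the LLM stays frozen,
# which is one plausible reading of "minimizing computational demands".
adapter = LLMAdapter()
llm_features = torch.randn(1, 77, 4096)  # placeholder for real LLM output
text_condition = adapter(llm_features)   # (1, 77, 2048), fed to the diffusion model
```

Because only a small adapter (rather than the whole text encoder) needs training, this kind of design would also explain the reduced training cost the paper reports.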

Why does it matter?

This matters because it could make text-to-image AI more useful for people who speak different languages, not just English. It also creates higher-quality images that match what people ask for more accurately. This could help artists, designers, and everyday users create the images they want more easily, no matter what language they speak.

Abstract

In this paper, we introduce LDGen, a novel method for integrating large language models (LLMs) into existing text-to-image diffusion models while minimizing computational demands. Traditional text encoders, such as CLIP and T5, exhibit limitations in multilingual processing, hindering image generation across diverse languages. We address these challenges by leveraging the advanced capabilities of LLMs. Our approach employs a language representation strategy that applies hierarchical caption optimization and human instruction techniques to derive precise semantic information. Subsequently, we incorporate a lightweight adapter and a cross-modal refiner to facilitate efficient feature alignment and interaction between LLMs and image features. LDGen reduces training time and enables zero-shot multilingual image generation. Experimental results indicate that our method surpasses baseline models in both prompt adherence and image aesthetic quality, while seamlessly supporting multiple languages. Project page: https://zrealli.github.io/LDGen.
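The abstract's "cross-modal refiner" suggests an attention block in which text and image features interact. Below is a minimal sketch under the assumption that text tokens query image features through cross-attention; the paper's actual design (query/key roles, gating, number of layers) may differ.

```python
import torch
import torch.nn as nn

class CrossModalRefiner(nn.Module):
    """Illustrative cross-attention block: text tokens (queries) attend
    to image features (keys/values) to refine text-image alignment.
    All sizes are assumptions, not taken from the paper."""
    def __init__(self, dim: int = 2048, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)
        self.ff = nn.Sequential(
            nn.Linear(dim, 4 * dim),
            nn.GELU(),
            nn.Linear(4 * dim, dim),
        )

    def forward(self, text: torch.Tensor, image: torch.Tensor) -> torch.Tensor:
        # text:  (batch, n_text_tokens, dim) adapted LLM features
        # image: (batch, n_image_tokens, dim) latent image features
        q, kv = self.norm_q(text), self.norm_kv(image)
        refined, _ = self.attn(q, kv, kv)
        text = text + refined           # residual connection
        return text + self.ff(text)     # feed-forward refinement

# Usage sketch with placeholder tensors.
refiner = CrossModalRefiner()
out = refiner(torch.randn(1, 77, 2048), torch.randn(1, 256, 2048))
```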