
Seedream 2.0: A Native Chinese-English Bilingual Image Generation Foundation Model

Lixue Gong, Xiaoxia Hou, Fanshi Li, Liang Li, Xiaochen Lian, Fei Liu, Liyang Liu, Wei Liu, Wei Lu, Yichun Shi, Shiqi Sun, Yu Tian, Zhi Tian, Peng Wang, Xun Wang, Ye Wang, Guofeng Wu, Jie Wu, Xin Xia, Xuefeng Xiao, Linjie Yang, Zhonghua Zhai

2025-03-12


Summary

This paper introduces Seedream 2.0, an AI image generator that creates pictures from Chinese or English text prompts while accurately rendering cultural details such as traditional clothing, as well as text in both languages.

What's the problem?

Existing AI image tools struggle to generate Chinese text correctly or show cultural details properly, often mixing up styles or failing to render characters accurately.

What's the solution?

Seedream 2.0 pairs a self-developed bilingual large language model as its text encoder, giving it a deep understanding of both languages, with a glyph-aligned text-rendering module for clear Chinese and English characters, and refines the model through multiple post-training phases (SFT and RLHF) so images look natural and culturally accurate.

Why it matters?

This helps designers and creators make culturally authentic art, posters, or ads in both languages without errors, bridging the gap between Chinese and English content creation.

Abstract

The rapid advancement of diffusion models has catalyzed remarkable progress in the field of image generation. However, prevalent models such as Flux, SD3.5, and Midjourney still grapple with issues like model bias, limited text rendering capabilities, and insufficient understanding of Chinese cultural nuances. To address these limitations, we present Seedream 2.0, a native Chinese-English bilingual image generation foundation model that excels across diverse dimensions, adeptly managing text prompts in both Chinese and English and supporting bilingual image generation and text rendering. We develop a powerful data system that facilitates knowledge integration, and a caption system that balances accuracy and richness in image description. In particular, Seedream is integrated with a self-developed bilingual large language model as a text encoder, allowing it to learn native knowledge directly from massive data. This enables it to generate high-fidelity images with accurate cultural nuances and aesthetic expressions described in either Chinese or English. Moreover, Glyph-Aligned ByT5 is applied for flexible character-level text rendering, while Scaled ROPE generalizes well to untrained resolutions. Multi-phase post-training optimizations, including SFT and RLHF iterations, further improve the overall capability. Through extensive experimentation, we demonstrate that Seedream 2.0 achieves state-of-the-art performance across multiple aspects, including prompt-following, aesthetics, text rendering, and structural correctness. Furthermore, Seedream 2.0 has been optimized through multiple RLHF iterations to closely align its output with human preferences, as revealed by its outstanding ELO score. In addition, it can be readily adapted to an instruction-based image editing model, such as SeedEdit, with strong editing capability that balances instruction-following and image consistency.
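
To make the abstract's point about "Scaled ROPE generalizes well to untrained resolutions" more concrete, here is a minimal sketch of the general idea behind scaling rotary position embeddings (RoPE): positions at an unseen, larger resolution are rescaled back into the range seen during training before the rotation is applied. The function names and the simple position-interpolation rule below are illustrative assumptions for exposition, not the paper's actual formulation.

```python
import numpy as np

def rope_angles(positions, dim, base=10000.0):
    """Standard RoPE rotation angles for 1-D positions (dim must be even)."""
    inv_freq = 1.0 / (base ** (np.arange(0, dim, 2) / dim))
    return np.outer(positions, inv_freq)            # shape: (num_positions, dim // 2)

def scaled_rope_angles(positions, dim, train_len, infer_len, base=10000.0):
    """Illustrative 'scaled RoPE': rescale positions from a longer, untrained
    axis back into the trained positional range before applying RoPE.
    (Assumed position-interpolation scheme; the paper's Scaled ROPE may differ.)"""
    scale = train_len / max(infer_len, train_len)   # <= 1 when extrapolating
    return rope_angles(positions * scale, dim, base)

def apply_rope(x, angles):
    """Rotate consecutive feature pairs of x (seq_len, dim) by the given angles."""
    x1, x2 = x[:, 0::2], x[:, 1::2]
    cos, sin = np.cos(angles), np.sin(angles)
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

# Usage example: queries along a 1024-token axis at inference,
# although the model was only trained with 512 positions per axis.
pos = np.arange(1024, dtype=np.float64)
q = np.random.randn(1024, 64)
q_rot = apply_rope(q, scaled_rope_angles(pos, dim=64, train_len=512, infer_len=1024))
```

The design intuition is that rescaling keeps every rotation angle inside the range the model saw during training, so attention patterns remain stable at resolutions (and hence token-grid sizes) it was never trained on.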