
ILLUME: Illuminating Your LLMs to See, Draw, and Self-Enhance

Chunwei Wang, Guansong Lu, Junwei Yang, Runhui Huang, Jianhua Han, Lu Hou, Wei Zhang, Hang Xu

2024-12-11

Summary

This paper introduces ILLUME, a multimodal large language model (MLLM) that unifies the understanding and generation of both text and images within a single framework.

What's the problem?

Building models that can both understand and generate text and images typically requires enormous amounts of image-text training data and complex, multi-stage training pipelines. Collecting and training on such large datasets is time-consuming and resource-intensive, which makes these unified models expensive to build and hard to reproduce.

What's the solution?

The authors introduce ILLUME, which uses a vision tokenizer that incorporates semantic information to align images with text efficiently, cutting the pretraining data to just 15 million image-text pairs, over four times fewer than unified MLLMs typically need. ILLUME also includes a self-enhancing alignment scheme in which the model judges whether its own generated images actually match their text descriptions; this feedback helps it interpret images more accurately and avoid misaligned generations (see the sketch below). Together, these pieces let ILLUME perform well across a range of tasks that mix text and images.
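To make the self-enhancing alignment idea concrete, here is a minimal Python sketch of one plausible round of it. The object names and method signatures (mllm.generate_image_tokens, mllm.assess_consistency, vision_tokenizer.decode) are illustrative assumptions, not the authors' actual code; the point is only the loop in which the model generates an image, critiques its own output, and that critique becomes extra supervision.

```python
# Hypothetical sketch of a self-enhancing alignment round (assumed interface,
# not the paper's implementation): the unified model draws an image from a
# prompt, then judges how well its own image matches the prompt, and the
# judgement is collected as new training data.

def self_enhancing_alignment_round(mllm, vision_tokenizer, prompts):
    """Collect (prompt, image, critique) triples for one self-assessment round."""
    new_training_examples = []
    for prompt in prompts:
        # 1. Text-to-image: the MLLM emits discrete vision tokens, which the
        #    vision tokenizer's decoder turns back into pixels.
        vision_tokens = mllm.generate_image_tokens(prompt)
        image = vision_tokenizer.decode(vision_tokens)

        # 2. Self-assessment: posed as an understanding task, the same model
        #    is asked whether the generated image is consistent with the prompt.
        critique = mllm.assess_consistency(image=image, description=prompt)

        # 3. The triple becomes supervision that couples the generation and
        #    understanding abilities of the single model.
        new_training_examples.append(
            {"prompt": prompt, "image": image, "critique": critique}
        )
    return new_training_examples
```

Because the same model both draws and judges, improvements in understanding directly tighten the feedback it gets on generation, which is the synergy the paper is after.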

Why it matters?

This research is important because it advances the capabilities of AI models in handling multiple types of data, making them more versatile for applications like image generation, editing, and understanding complex visual content. By improving efficiency and performance, ILLUME paves the way for more powerful AI tools that can be used in creative fields, education, and many other areas.

Abstract

In this paper, we introduce ILLUME, a unified multimodal large language model (MLLM) that seamlessly integrates multimodal understanding and generation capabilities within a single large language model through a unified next-token prediction formulation. To address the large dataset size typically required for image-text alignment, we propose to enhance data efficiency through the design of a vision tokenizer that incorporates semantic information and a progressive multi-stage training procedure. This approach reduces the dataset size to just 15M for pretraining -- over four times fewer than what is typically needed -- while achieving competitive or even superior performance with existing unified MLLMs, such as Janus. Additionally, to promote synergistic enhancement between understanding and generation capabilities, which is under-explored in previous works, we introduce a novel self-enhancing multimodal alignment scheme. This scheme supervises the MLLM to self-assess the consistency between text descriptions and self-generated images, facilitating the model to interpret images more accurately and avoid unrealistic and incorrect predictions caused by misalignment in image generation. Based on extensive experiments, our proposed ILLUME stands out and competes with state-of-the-art unified MLLMs and specialized models across various benchmarks for multimodal understanding, generation, and editing.
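As a rough illustration of the unified next-token prediction formulation mentioned in the abstract, the sketch below shows how text tokens and discrete vision-tokenizer codes can share one vocabulary and one language-modeling loss. The transformer callable and the vocabulary-offset scheme are assumptions chosen for clarity, not details taken from the paper.

```python
# Minimal sketch, assuming the usual "everything is a token" setup: text ids
# and discrete vision codes are concatenated into one sequence and trained
# with ordinary next-token cross-entropy.

import torch
import torch.nn.functional as F

def unified_next_token_loss(transformer, text_ids, vision_ids, text_vocab_size):
    """Cross-entropy over an interleaved text+vision token sequence.

    text_ids:   (batch, T_text) ids from the text tokenizer
    vision_ids: (batch, T_img) discrete codes from the vision tokenizer,
                shifted past the text vocabulary so both modalities share
                one embedding table and one output softmax
    """
    sequence = torch.cat([text_ids, vision_ids + text_vocab_size], dim=1)

    # Standard next-token prediction: predict token t+1 from tokens <= t.
    logits = transformer(sequence[:, :-1])   # (batch, L-1, vocab)
    targets = sequence[:, 1:]                # (batch, L-1)
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           targets.reshape(-1))
```

Treating vision codes as just more vocabulary entries is what lets a single decoder handle understanding (text after image tokens) and generation (image tokens after text) with the same loss.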