EasyRef: Omni-Generalized Group Image Reference for Diffusion Models via Multimodal LLM

Zhuofan Zong, Dongzhi Jiang, Bingqi Ma, Guanglu Song, Hao Shao, Dazhong Shen, Yu Liu, Hongsheng Li

2024-12-13

Summary

This paper introduces EasyRef, a new method that lets diffusion models generate images conditioned on multiple reference images and a text prompt, improving how these models capture the visual elements the references share.

What's the problem?

Existing approaches struggle when an image generation model must follow multiple reference images at once. Tuning-free methods typically average the embeddings of the reference images, which discards details and the relationships between them, while tuning-based methods such as LoRA must be fine-tuned separately for every new group of images, making the process complicated and inefficient.

What's the solution?

EasyRef addresses these issues with a multimodal large language model (MLLM) that can process both images and text. Prompted with an instruction, the MLLM captures the visual elements that are consistent across the reference images, and its representations are injected into the diffusion model through adapters, which also lets the method generalize to unseen domains. EasyRef further uses a reference aggregation strategy to reduce computational cost and a progressive training scheme to preserve fine-grained detail. Finally, the authors introduce MRBench, a benchmark for evaluating how well models generate images from multiple references.
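To make the conditioning pathway concrete, here is a minimal, hypothetical sketch of how MLLM-derived reference tokens could be injected into a diffusion U-Net through an adapter cross-attention layer. The module names, layer placement, and dimensions are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class AdapterCrossAttention(nn.Module):
    """Cross-attention adapter: U-Net features attend to MLLM reference tokens."""

    def __init__(self, unet_dim: int, cond_dim: int, num_heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(unet_dim)
        self.attn = nn.MultiheadAttention(
            embed_dim=unet_dim, kdim=cond_dim, vdim=cond_dim,
            num_heads=num_heads, batch_first=True,
        )

    def forward(self, unet_tokens: torch.Tensor, ref_tokens: torch.Tensor) -> torch.Tensor:
        # Residual injection: keep the original U-Net features and add the
        # information attended from the reference tokens.
        attended, _ = self.attn(self.norm(unet_tokens), ref_tokens, ref_tokens)
        return unet_tokens + attended


# Toy shapes standing in for real intermediate features.
unet_features = torch.randn(2, 64, 320)   # (batch, spatial tokens, U-Net channels)
mllm_tokens = torch.randn(2, 16, 1024)    # (batch, condition tokens, MLLM width)

adapter = AdapterCrossAttention(unet_dim=320, cond_dim=1024)
fused = adapter(unet_features, mllm_tokens)
print(fused.shape)  # torch.Size([2, 64, 320])
```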

Why it matters?

This research is significant because it enhances the ability of AI models to generate high-quality images that accurately reflect the details from multiple sources. By improving how these models work with group references, EasyRef can lead to better applications in art, design, and any field that relies on complex image generation.

Abstract

Significant achievements in personalization of diffusion models have been witnessed. Conventional tuning-free methods mostly encode multiple reference images by averaging their image embeddings as the injection condition, but such an image-independent operation cannot perform interaction among images to capture consistent visual elements within multiple references. Although the tuning-based Low-Rank Adaptation (LoRA) can effectively extract consistent elements within multiple images through the training process, it necessitates specific finetuning for each distinct image group. This paper introduces EasyRef, a novel plug-and-play adaptation method that enables diffusion models to be conditioned on multiple reference images and the text prompt. To effectively exploit consistent visual elements within multiple images, we leverage the multi-image comprehension and instruction-following capabilities of the multimodal large language model (MLLM), prompting it to capture consistent visual elements based on the instruction. Besides, injecting the MLLM's representations into the diffusion process through adapters can easily generalize to unseen domains, mining the consistent visual elements within unseen data. To mitigate computational costs and enhance fine-grained detail preservation, we introduce an efficient reference aggregation strategy and a progressive training scheme. Finally, we introduce MRBench, a new multi-reference image generation benchmark. Experimental results demonstrate EasyRef surpasses both tuning-free methods like IP-Adapter and tuning-based methods like LoRA, achieving superior aesthetic quality and robust zero-shot generalization across diverse domains.
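As a rough illustration of the efficiency idea behind the reference aggregation strategy described above, the sketch below compresses tokens from an arbitrary number of reference images into a fixed number of learnable query tokens, so the downstream attention cost no longer scales with the size of the reference group. All names and dimensions are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn


class ReferenceAggregator(nn.Module):
    """Pool tokens from many reference images into a fixed-size token set."""

    def __init__(self, cond_dim: int, num_queries: int = 64, num_heads: int = 8):
        super().__init__()
        # Learnable queries attend over all reference-image tokens at once.
        self.queries = nn.Parameter(torch.randn(num_queries, cond_dim) * 0.02)
        self.norm = nn.LayerNorm(cond_dim)
        self.attn = nn.MultiheadAttention(cond_dim, num_heads, batch_first=True)

    def forward(self, ref_tokens: torch.Tensor) -> torch.Tensor:
        # ref_tokens: (batch, num_refs * tokens_per_ref, cond_dim)
        queries = self.queries.unsqueeze(0).expand(ref_tokens.size(0), -1, -1)
        pooled, _ = self.attn(queries, self.norm(ref_tokens), self.norm(ref_tokens))
        return pooled  # (batch, num_queries, cond_dim), independent of group size


# Five reference images with 256 tokens each collapse to 64 condition tokens.
tokens = torch.randn(2, 5 * 256, 1024)
aggregator = ReferenceAggregator(cond_dim=1024)
print(aggregator(tokens).shape)  # torch.Size([2, 64, 1024])
```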