Color Me Correctly: Bridging Perceptual Color Spaces and Text Embeddings for Improved Diffusion Generation
Sung-Lin Tsai, Bo-Lun Huang, Yu Ting Shen, Cheng Yu Yeo, Chiang Tseng, Bo-Kai Ruan, Wen-Sheng Lien, Hong-Han Shuai
2025-09-15
Summary
This paper addresses the difficulty AI image generators have in accurately rendering specific color requests, especially when prompts use compound or ambiguous color names.
What's the problem?
Current AI image generators, called diffusion models, often get colors wrong when prompts include nuanced color descriptions like 'Tiffany blue' or 'lime green'. They struggle to work out what the user *really* means by those terms, so the generated images don't match the intended look. Previous fixes relied on complicated techniques, such as manipulating how the AI attends to different parts of the prompt, supplying reference images, or retraining the model, but none of them consistently resolves ambiguous color terms.
What's the solution?
The researchers developed a training-free method, meaning the AI needs no extra training. A powerful language model, similar to the ones powering chatbots, first clarifies the color terms in the prompt, pinning down exactly which color is being asked for. The system then refines the prompt's internal representation (its text embeddings), blending colors according to their relationships in the perceptual CIELAB color space, so the generated image carries the intended colors.
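A minimal sketch of how such a pipeline might look in Python is shown below. The `resolve_color_with_llm` helper, the chosen anchor colors, the inverse-distance weighting rule, and the random stand-in embeddings are all illustrative assumptions; the paper's actual disambiguation prompt and blending rule are not reproduced here. In a real system, the anchor embeddings would come from the diffusion model's text encoder (e.g., CLIP).

```python
import numpy as np
from skimage.color import rgb2lab  # sRGB -> CIELAB conversion

def to_lab(rgb):
    """Convert an sRGB triplet in [0, 1] to CIELAB."""
    return rgb2lab(np.asarray(rgb, dtype=float).reshape(1, 1, 3))[0, 0]

def resolve_color_with_llm(term):
    """Hypothetical stand-in for the LLM step: map an ambiguous color
    name to a concrete sRGB value. A real system would query an LLM;
    a tiny lookup table is used here purely for illustration."""
    table = {"tiffany blue": (0.04, 0.73, 0.71), "lime green": (0.20, 0.80, 0.20)}
    return table[term.lower()]

def blend_weights(target_rgb, anchor_rgbs):
    """Weight each anchor color by its perceptual closeness to the
    target, measured as Euclidean distance in CIELAB."""
    target_lab = to_lab(target_rgb)
    dists = np.array([np.linalg.norm(to_lab(a) - target_lab) for a in anchor_rgbs])
    weights = 1.0 / (dists + 1e-6)   # closer anchors get larger weights
    return weights / weights.sum()   # normalize to sum to 1

# Anchor colors whose names a text encoder already handles reliably.
anchors = {"blue": (0.0, 0.0, 1.0), "cyan": (0.0, 1.0, 1.0), "green": (0.0, 1.0, 0.0)}

# Stand-in text embeddings for the anchor color words (random vectors here;
# in practice these would come from the T2I model's text encoder).
rng = np.random.default_rng(0)
anchor_embs = {name: rng.standard_normal(768) for name in anchors}

target_rgb = resolve_color_with_llm("Tiffany blue")
w = blend_weights(target_rgb, list(anchors.values()))
print(dict(zip(anchors, np.round(w, 3))))  # CIELAB-proximity weights

# Blend the anchor embeddings according to those weights to obtain a
# refined embedding for the ambiguous color term.
refined_emb = sum(wi * anchor_embs[name] for wi, name in zip(w, anchors))
```

The point of the design is that the blending weights are computed in a perceptually oriented color space rather than in raw RGB, so an ambiguous color like 'Tiffany blue' is pulled toward anchors such as cyan and blue in proportion to how close it actually looks to each.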
Why it matters?
This research is important because accurate color representation is crucial for many real-world applications like designing clothes, visualizing products, or planning interior spaces. By improving the AI's ability to understand and generate specific colors, this work makes these applications more practical and reliable, allowing for more precise and visually appealing results without needing complex setups or retraining.
Abstract
Accurate color alignment in text-to-image (T2I) generation is critical for applications such as fashion, product visualization, and interior design, yet current diffusion models struggle with nuanced and compound color terms (e.g., Tiffany blue, lime green, hot pink), often producing images that are misaligned with human intent. Existing approaches rely on cross-attention manipulation, reference images, or fine-tuning but fail to systematically resolve ambiguous color descriptions. To precisely render colors under prompt ambiguity, we propose a training-free framework that enhances color fidelity by leveraging a large language model (LLM) to disambiguate color-related prompts and guiding color blending operations directly in the text embedding space. Our method first employs the LLM to resolve ambiguous color terms in the text prompt, and then refines the text embeddings based on the spatial relationships of the resulting color terms in the CIELAB color space. Unlike prior methods, our approach improves color accuracy without requiring additional training or external reference images. Experimental results demonstrate that our framework improves color alignment without compromising image quality, bridging the gap between text semantics and visual generation.
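For readers who want to experiment, the snippet below shows one plausible way to apply an embedding-space color blend with an off-the-shelf Stable Diffusion pipeline from the diffusers library. The model id, the prompts, and the 0.7/0.3 weights are illustrative assumptions, and injecting the blend through `prompt_embeds` is a simplified stand-in for the paper's refinement procedure rather than its exact mechanism.

```python
import torch
from diffusers import StableDiffusionPipeline

# Example model id; any Stable Diffusion checkpoint with a CLIP text encoder works.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float32)

def encode(prompt):
    """Encode a prompt with the pipeline's own CLIP text encoder."""
    tok = pipe.tokenizer(prompt, padding="max_length",
                         max_length=pipe.tokenizer.model_max_length,
                         truncation=True, return_tensors="pt")
    with torch.no_grad():
        return pipe.text_encoder(tok.input_ids.to(pipe.device))[0]

# Anchor prompts that differ only in the color word; the weights are
# illustrative CIELAB-proximity weights (see the earlier sketch).
emb_cyan = encode("a leather handbag, cyan")
emb_blue = encode("a leather handbag, blue")
prompt_embeds = 0.7 * emb_cyan + 0.3 * emb_blue

# Inject the blended embedding directly, bypassing the plain text prompt.
image = pipe(prompt_embeds=prompt_embeds, num_inference_steps=30).images[0]
image.save("handbag_blended_color.png")
```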