VLM-Guided Adaptive Negative Prompting for Creative Generation

Shelly Golan, Yotam Nitzan, Zongze Wu, Or Patashnik

2025-10-14

Summary

This paper focuses on making AI image generators more creative: instead of only faithfully rendering what you ask for, the goal is to produce genuinely new and surprising visuals.

What's the problem?

Current AI image generators, while good at making realistic images from text, often lack true creativity and tend to stick to familiar visual ideas. Existing methods for boosting creativity are either limited in what they can explore or demand substantial extra work, such as retraining the model or carefully tuning its settings, which makes them impractical for everyday use.

What's the solution?

The researchers developed a new technique called VLM-Guided Adaptive Negative-Prompting. It works by using another AI, a vision-language model, to look at the image as it's being created and subtly nudge the generator *away* from common visual concepts. This happens in real-time, without any extra training, and helps the AI explore more unusual and imaginative ideas while still making sure the image makes sense and contains recognizable objects.
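The control loop described above can be sketched in a few lines. This is a minimal, illustrative sketch only: the real method plugs a VLM and a diffusion denoiser into this loop, whereas here both are stubbed with toy stand-ins (`vlm_identify_concepts`, `denoise_step`, and the dictionary "latent" are all hypothetical) so the adaptive negative-prompting logic itself is visible and runnable.

```python
def vlm_identify_concepts(preview):
    # Hypothetical stand-in for querying a vision-language model with
    # something like "what object is emerging in this image?".
    # Here we simply return the most salient concept in the toy preview.
    return [max(preview, key=preview.get)]

def denoise_step(latent, negative_prompts):
    # Hypothetical stand-in for one diffusion denoising step with
    # negative prompting: each suppressed concept loses salience.
    for concept in negative_prompts:
        if concept in latent:
            latent[concept] *= 0.5
    return latent

def generate(steps=8, vlm_every=2):
    # Toy "latent": salience of candidate concepts during generation.
    # In the real pipeline this would be the diffusion latent tensor.
    latent = {"ordinary chair": 1.0, "armchair": 0.8, "novel hybrid": 0.3}
    negatives = []  # negative-prompt list, grown adaptively at inference time
    for step in range(steps):
        latent = denoise_step(latent, negatives)
        if step % vlm_every == 0:
            # Periodically decode a preview, ask the VLM which
            # conventional concept is forming, and steer away from it.
            for concept in vlm_identify_concepts(latent):
                if concept != "novel hybrid" and concept not in negatives:
                    negatives.append(concept)
    return latent, negatives
```

Note how no training is involved: the negative-prompt list is built on the fly during a single generation run, which is what makes the approach cheap to drop into an existing diffusion pipeline.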

Why it matters?

This research is important because it offers a simple and effective way to unlock more creativity in AI image generation. It requires little computing power or technical expertise, and it works well even for complex scenes with multiple objects. This means anyone can use it to create unique, original images that go beyond what the model would typically produce.

Abstract

Creative generation is the synthesis of new, surprising, and valuable samples that reflect user intent yet cannot be envisioned in advance. This task aims to extend human imagination, enabling the discovery of visual concepts that exist in the unexplored spaces between familiar domains. While text-to-image diffusion models excel at rendering photorealistic scenes that faithfully match user prompts, they still struggle to generate genuinely novel content. Existing approaches to enhance generative creativity either rely on interpolation of image features, which restricts exploration to predefined categories, or require time-intensive procedures such as embedding optimization or model fine-tuning. We propose VLM-Guided Adaptive Negative-Prompting, a training-free, inference-time method that promotes creative image generation while preserving the validity of the generated object. Our approach utilizes a vision-language model (VLM) that analyzes intermediate outputs of the generation process and adaptively steers it away from conventional visual concepts, encouraging the emergence of novel and surprising outputs. We evaluate creativity through both novelty and validity, using statistical metrics in the CLIP embedding space. Through extensive experiments, we show consistent gains in creative novelty with negligible computational overhead. Moreover, unlike existing methods that primarily generate single objects, our approach extends to complex scenarios, such as generating coherent sets of creative objects and preserving creativity within elaborate compositional prompts. Our method integrates seamlessly into existing diffusion pipelines, offering a practical route to producing creative outputs that venture beyond the constraints of textual descriptions.