Negative Token Merging: Image-based Adversarial Feature Guidance

Jaskirat Singh, Lindsey Li, Weijia Shi, Ranjay Krishna, Yejin Choi, Pang Wei Koh, Michael F. Cohen, Stephen Gould, Liang Zheng, Luke Zettlemoyer

2024-12-06

Summary

This paper introduces Negative Token Merging (NegToMe), a new method that improves image generation by using visual features from reference images to guide the creation process and avoid unwanted elements.

What's the problem?

Current methods for guiding image generation often rely on text prompts to steer the output away from undesired concepts, like copyrighted characters. However, using text alone can be insufficient for capturing complex visual ideas and ensuring the generated images don't resemble protected content.

What's the solution?

NegToMe addresses this issue by letting the model use visual features from reference images rather than text prompts alone. During the reverse diffusion process, it selectively pushes apart matching semantic features between the generated image and the reference image. Used against other images in the same batch, NegToMe increases the diversity of generated outputs (racial, gender, and visual); used against a copyrighted reference asset, it reduces visual similarity to that content by 34.57%. It is easy to implement in a few lines of code and works with various diffusion models without any additional training.
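The core operation described above can be sketched in a few lines: match each output token to its most similar reference token, then push apart the pairs whose similarity is high. This is a minimal illustrative sketch, not the authors' code; the function name, the linear-extrapolation update, and the `alpha`/`threshold` parameters are assumptions for illustration.

```python
import numpy as np

def neg_tome_sketch(out_tokens, ref_tokens, alpha=0.1, threshold=0.5):
    """Illustrative sketch of negative token merging (NegToMe).

    out_tokens: (n, d) token features of the current generation
    ref_tokens: (m, d) token features of the reference image

    For each output token, find its best-matching reference token by
    cosine similarity; if the match exceeds `threshold`, push the output
    token away from it by linear extrapolation (an assumed update rule).
    """
    out_n = out_tokens / np.linalg.norm(out_tokens, axis=1, keepdims=True)
    ref_n = ref_tokens / np.linalg.norm(ref_tokens, axis=1, keepdims=True)
    sim = out_n @ ref_n.T                       # (n, m) cosine similarities
    best = sim.argmax(axis=1)                   # best-matching ref token per output token
    best_sim = sim[np.arange(len(out_tokens)), best]
    mask = best_sim > threshold                 # only push apart close matches
    merged = out_tokens.copy()
    merged[mask] = out_tokens[mask] + alpha * (out_tokens[mask] - ref_tokens[best[mask]])
    return merged
```

In the actual method this operation would be applied to intermediate features at each step of the reverse diffusion process, which is why it needs no training and adds only a small inference-time cost.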

Why it matters?

This research is important because it provides a more effective way to generate images that are both diverse and compliant with copyright laws. By improving how models understand and manipulate visual information, NegToMe can lead to better quality in creative applications like art generation, advertising, and game design, while also protecting intellectual property.

Abstract

Text-based adversarial guidance using a negative prompt has emerged as a widely adopted approach to push the output features away from undesired concepts. While useful, performing adversarial guidance using text alone can be insufficient to capture complex visual concepts and avoid undesired visual elements like copyrighted characters. In this paper, we explore for the first time an alternate modality in this direction by performing adversarial guidance directly using visual features from a reference image or other images in a batch. In particular, we introduce negative token merging (NegToMe), a simple but effective training-free approach which performs adversarial guidance by selectively pushing apart matching semantic features (between reference and output generation) during the reverse diffusion process. When used w.r.t. other images in the same batch, we observe that NegToMe significantly increases output diversity (racial, gender, visual) without sacrificing output image quality. Similarly, when used w.r.t. a reference copyrighted asset, NegToMe helps reduce visual similarity with copyrighted content by 34.57%. NegToMe is simple to implement in just a few lines of code, adds only marginal (<4%) inference time, and generalizes to different diffusion architectures like Flux, which do not natively support the use of a separate negative prompt. Code is available at https://negtome.github.io