
Negative Token Merging: Image-based Adversarial Feature Guidance

Jaskirat Singh, Lindsey Li, Weijia Shi, Ranjay Krishna, Yejin Choi, Pang Wei Koh, Michael F. Cohen, Stephen Gould, Liang Zheng, Luke Zettlemoyer

2024-12-06


Summary

This paper introduces Negative Token Merging (NegToMe), a method that guides image generation using visual features from reference images, making it easier to steer outputs away from unwanted elements such as copyrighted characters.

What's the problem?

Traditional methods for guiding image generation often rely on text descriptions to steer the output away from undesired concepts. However, using text alone can be insufficient for capturing complex visual ideas or avoiding specific elements, such as characters that are copyrighted.

What's the solution?

The authors introduce NegToMe, which performs adversarial guidance directly with visual features from a reference image instead of relying solely on text prompts. During the reverse diffusion process, it pushes the generated image's features away from matching features in the reference. Applied across images in the same batch, this increases output diversity; applied against a reference copyrighted asset, it reduces visual similarity to that content by roughly 35%. NegToMe is training-free, easy to implement, and works with different image generation models.
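The core operation can be pictured with a short sketch: for each token of the image being generated, find its best-matching reference token by cosine similarity and push it away from that match. This is a minimal illustration, not the authors' implementation; the function name and the alpha and threshold parameters are assumptions made for clarity.

```python
import torch
import torch.nn.functional as F

def negative_token_merge(out_tokens, ref_tokens, alpha=0.9, threshold=0.65):
    """Hypothetical sketch of negative token merging.

    out_tokens: (N, d) tokens of the image being generated
    ref_tokens: (M, d) tokens of the reference image
    alpha, threshold: illustrative guidance strength and matching cutoff
    """
    out_n = F.normalize(out_tokens, dim=-1)
    ref_n = F.normalize(ref_tokens, dim=-1)

    # Cosine similarity between every output token and every reference token.
    sim = out_n @ ref_n.t()                  # (N, M)
    best_sim, best_idx = sim.max(dim=-1)     # best-matching reference token per output token
    matched_ref = ref_tokens[best_idx]       # (N, d)

    # Push each output token away from its matched reference token
    # (linear extrapolation), but only where the semantic match is strong.
    pushed = out_tokens + alpha * (out_tokens - matched_ref)
    mask = (best_sim > threshold).unsqueeze(-1)
    return torch.where(mask, pushed, out_tokens)
```

In this sketch the "selective" part of the guidance comes from the threshold: tokens that do not closely match anything in the reference are left untouched, so overall image quality is not disturbed.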

Why it matters?

This research is important because it enhances the ability of image generation systems to create diverse and unique outputs while avoiding legal issues related to copyright. By improving how these systems understand and utilize visual information, NegToMe can lead to more creative applications in art, design, and entertainment.

Abstract

Text-based adversarial guidance using a negative prompt has emerged as a widely adopted approach to push the output features away from undesired concepts. While useful, performing adversarial guidance using text alone can be insufficient to capture complex visual concepts and avoid undesired visual elements like copyrighted characters. In this paper, for the first time we explore an alternate modality in this direction by performing adversarial guidance directly using visual features from a reference image or other images in a batch. In particular, we introduce negative token merging (NegToMe), a simple but effective training-free approach which performs adversarial guidance by selectively pushing apart matching semantic features (between reference and output generation) during the reverse diffusion process. When used w.r.t. other images in the same batch, we observe that NegToMe significantly increases output diversity (racial, gender, visual) without sacrificing output image quality. Similarly, when used w.r.t. a reference copyrighted asset, NegToMe helps reduce visual similarity with copyrighted content by 34.57%. NegToMe is simple to implement using just a few lines of code, uses only marginally higher (<4%) inference times and generalizes to different diffusion architectures like Flux, which do not natively support the use of a separate negative prompt. Code is available at https://negtome.github.io
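For the within-batch use described above, one simple arrangement (assumed here for illustration, not taken from the paper's code) is to treat the first image in the batch as the reference and push every other image's tokens away from it at each denoising step, reusing the sketch from earlier:

```python
import torch

def diversify_batch(batch_tokens, merge_fn, alpha=0.9, threshold=0.65):
    """Hypothetical within-batch diversity sketch.

    batch_tokens: (B, N, d) image tokens at one denoising step
    merge_fn: a function like the negative_token_merge sketch above
    """
    ref = batch_tokens[0]            # assumption: first batch element acts as the reference
    out = [batch_tokens[0]]          # the reference itself is left unchanged
    for tokens in batch_tokens[1:]:
        out.append(merge_fn(tokens, ref, alpha=alpha, threshold=threshold))
    return torch.stack(out, dim=0)

# Conceptual usage inside a diffusion transformer's denoising loop:
# tokens = diversify_batch(tokens, negative_token_merge)
```

Because the adjustment is applied only to token features at inference time, this kind of hook adds little overhead, which is consistent with the small (<4%) inference-time increase reported in the abstract.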