CoRe: Context-Regularized Text Embedding Learning for Text-to-Image Personalization
Feize Wu, Yun Pang, Junyi Zhang, Lianyu Pang, Jian Yin, Baoquan Zhao, Qing Li, Xudong Mao
2024-09-02

Summary
This paper introduces CoRe, a method for improving text-to-image personalization: generating images of a user-provided subject from text prompts while preserving the subject's original identity.
What's the problem?
Current text-to-image personalization methods struggle to preserve the identity of a person or object while also faithfully following the user's prompt. The result is often an image that misses either the intended subject or the described context.
What's the solution?
The authors introduce Context Regularization (CoRe), which embeds the new concept into the input embedding space of the CLIP text encoder so that it integrates seamlessly with existing tokens. Rather than supervising the concept embedding directly, CoRe regularizes the text encoder's outputs for the context tokens surrounding the new concept, based on the insight that those outputs come out right only if the concept embedding is learned correctly. Because this requires only prompts, not corresponding images, CoRe can be applied to arbitrary prompts, making it versatile and efficient.
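To make the idea concrete, here is a minimal sketch of a context-regularization loss in PyTorch. It is not the authors' implementation: a toy transformer stands in for the CLIP text encoder, the prompt token IDs and the super-category token are made up, and the reference pass (the same prompt with a super-category token in place of the new concept) is an assumption about how the regularization target could be produced.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

VOCAB, DIM, SEQ = 100, 32, 8
token_emb = nn.Embedding(VOCAB, DIM)          # stand-in input embedding table
encoder = nn.TransformerEncoder(              # stand-in for the CLIP text encoder
    nn.TransformerEncoderLayer(d_model=DIM, nhead=4, batch_first=True),
    num_layers=2,
)

# Hypothetical prompt, e.g. "a photo of <new> on the beach"; slot 3 holds the concept.
prompt_ids = torch.tensor([[1, 2, 3, 0, 4, 5, 6, 7]])
concept_slot = 3
super_id = 9                                  # super-category token, e.g. "dog"

# Learnable input embedding for the new concept, initialized from the super-category.
v_star = nn.Parameter(token_emb(torch.tensor(super_id)).detach().clone())

def encode(concept_vec):
    """Encode the prompt with `concept_vec` spliced into the concept slot."""
    x = token_emb(prompt_ids)
    x = torch.cat(
        [x[:, :concept_slot], concept_vec[None, None, :], x[:, concept_slot + 1:]],
        dim=1,
    )
    return encoder(x)

# Reference pass: same prompt with the super-category token instead of the concept.
with torch.no_grad():
    ref = encode(token_emb(torch.tensor(super_id)))

out = encode(v_star)

# Context regularization: pull the encoder outputs of the *context* tokens
# (every position except the concept slot) toward the reference pass.
mask = torch.ones(SEQ, dtype=torch.bool)
mask[concept_slot] = False
core_loss = ((out[:, mask] - ref[:, mask]) ** 2).mean()
core_loss.backward()  # in training this term would be added to the usual diffusion loss
```

Because self-attention mixes positions, the context-token outputs depend on `v_star`, so the gradient of this loss steers the concept embedding toward one that leaves its surrounding context undistorted.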
Why it matters?
This research is important because it enhances the ability of AI systems to create personalized images that accurately reflect user intentions. This can improve applications in fields like digital art, advertising, and social media, where high-quality, contextually relevant images are crucial.
Abstract
Recent advances in text-to-image personalization have enabled high-quality and controllable image synthesis for user-provided concepts. However, existing methods still struggle to balance identity preservation with text alignment. Our approach is based on the fact that generating prompt-aligned images requires a precise semantic understanding of the prompt, which involves accurately processing the interactions between the new concept and its surrounding context tokens within the CLIP text encoder. To address this, we aim to embed the new concept properly into the input embedding space of the text encoder, allowing for seamless integration with existing tokens. We introduce Context Regularization (CoRe), which enhances the learning of the new concept's text embedding by regularizing its context tokens in the prompt. This is based on the insight that appropriate output vectors of the text encoder for the context tokens can only be achieved if the new concept's text embedding is correctly learned. CoRe can be applied to arbitrary prompts without requiring the generation of corresponding images, thus improving the generalization of the learned text embedding. Additionally, CoRe can serve as a test-time optimization technique to further enhance the generations for specific prompts. Comprehensive experiments demonstrate that our method outperforms several baseline methods in both identity preservation and text alignment. Code will be made publicly available.