DiffCLIP: Differential Attention Meets CLIP
Hasan Abed Al Kader Hammoud, Bernard Ghanem
2025-03-11
Summary
This paper introduces DiffCLIP, an upgraded version of the CLIP vision-language model that uses a differential attention mechanism to focus on the important details in images and text while ignoring distractions, much as noise-canceling headphones block background noise.
What's the problem?
Existing AI models like CLIP sometimes get confused by irrelevant details in images or text, making them less accurate at matching the right pictures to words.
What's the solution?
DiffCLIP adds a 'difference-checking' step: it computes two attention maps over the same input (each focusing on different parts of the image or text) and subtracts one from the other, canceling out shared noise so the model attends to what actually matters.
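The difference-checking step can be sketched in a few lines. Below is a minimal single-head NumPy illustration of the differential-attention idea; the function name, weight-matrix arguments, and the fixed scalar `lam` are illustrative simplifications, not the paper's exact formulation (see the linked repository for the real implementation).

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def differential_attention(x, Wq1, Wk1, Wq2, Wk2, Wv, lam=0.5):
    """Simplified single-head differential attention.

    Two attention maps are computed from two sets of query/key
    projections; their weighted difference cancels attention weight
    that both maps assign (the shared 'noise') before the result
    is applied to the values.
    """
    d = Wq1.shape[1]
    A1 = softmax((x @ Wq1) @ (x @ Wk1).T / np.sqrt(d))  # first attention map
    A2 = softmax((x @ Wq2) @ (x @ Wk2).T / np.sqrt(d))  # second attention map
    return (A1 - lam * A2) @ (x @ Wv)                   # difference re-weights the values

# Tiny usage example: 4 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
weights = [rng.standard_normal((8, 8)) / np.sqrt(8) for _ in range(5)]
out = differential_attention(x, *weights)
print(out.shape)  # (4, 8): one output vector per token
```

Note that with `lam = 0` this reduces to standard single-head attention; in the differential-attention formulation the subtraction weight is learned during training rather than fixed as it is here.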
Why does it matter?
This makes AI tools better at tasks like finding matching images for captions or identifying objects in photos, which is useful for things like search engines, accessibility tools, or content moderation.
Abstract
We propose DiffCLIP, a novel vision-language model that extends the differential attention mechanism to CLIP architectures. Differential attention was originally developed for large language models to amplify relevant context while canceling out noisy information. In this work, we integrate this mechanism into CLIP's dual encoder (image and text) framework. With minimal additional parameters, DiffCLIP achieves superior performance on image-text understanding tasks. Across zero-shot classification, retrieval, and robustness benchmarks, DiffCLIP consistently outperforms baseline CLIP models. Notably, these gains come with negligible computational overhead, demonstrating that differential attention can significantly enhance multi-modal representations without sacrificing efficiency. Code can be found at https://github.com/hammoudhasan/DiffCLIP.