ProCLIP: Progressive Vision-Language Alignment via LLM-based Embedder
Xiaoxing Hu, Kaicheng Yang, Ziyong Feng, Qi Ming, Zonghao Guo, Xiang An, Junchi Yan, Xue Yang
2025-10-22
Summary
This paper introduces ProCLIP, a new method for improving how well AI models understand images and text together, especially when dealing with longer pieces of text and multiple languages.
What's the problem?
The original CLIP model, which connects images and text, has trouble with long descriptions because it can only process a limited number of words. It also doesn't work well with languages other than English. Attempts to fix this by swapping out the text part of CLIP with more powerful language models haven't been fully successful because these new language models weren't initially designed to work *with* CLIP's image understanding system, potentially messing up what CLIP already knows about images.
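The length limit mentioned above is a hard cap on CLIP's text input. A minimal sketch of what that constraint looks like in practice (this is an illustrative stand-in, not CLIP's actual tokenizer):

```python
# CLIP's text encoder has a fixed context window of 77 tokens
# (including special tokens); anything beyond it is silently dropped.
CONTEXT_LENGTH = 77

def truncate_like_clip(token_ids):
    """Keep only the tokens that fit in CLIP's fixed context window."""
    return token_ids[:CONTEXT_LENGTH]

# A long caption loses everything past the window:
long_caption_ids = list(range(500))      # pretend these are 500 token ids
kept = truncate_like_clip(long_caption_ids)
```

Any fine-grained detail described late in a long caption is therefore invisible to the original text encoder, which is the gap LLM-based embedders aim to close.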
What's the solution?
ProCLIP solves this by carefully teaching a new language model to work with CLIP's image understanding system. It first has the language model inherit CLIP's existing text-understanding abilities through knowledge distillation, then gradually aligns the two systems with contrastive learning on image-text pairs. To keep the language model from overwriting what CLIP already knows, ProCLIP adds a self-distillation regularizer and aligns both the *meaning* of individual examples and the overall structure of how information is represented in the two systems.
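The two-stage recipe above can be sketched as two toy loss computations: a distillation loss that pulls the new embedder toward CLIP's text embeddings, then a standard CLIP-style contrastive loss for image-text tuning. This is a minimal pure-Python illustration with hypothetical function names, not the paper's implementation (which trains real encoders in a deep-learning framework):

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def distill_loss(llm_embs, clip_text_embs):
    """Stage 1 sketch: pull the LLM embedder's outputs toward CLIP's
    text embeddings (1 - cosine similarity, averaged over the batch)."""
    pairs = zip(llm_embs, clip_text_embs)
    return sum(1.0 - cosine(u, v) for u, v in pairs) / len(llm_embs)

def contrastive_loss(img_embs, txt_embs, temperature=0.07):
    """Stage 2 sketch: symmetric InfoNCE over matched image/text pairs,
    as in standard CLIP-style contrastive tuning."""
    n = len(img_embs)
    logits = [[cosine(i, t) / temperature for t in txt_embs]
              for i in img_embs]

    def nll(row, target):
        # Numerically stable cross-entropy for one row of logits.
        z = max(row)
        log_sum = z + math.log(sum(math.exp(x - z) for x in row))
        return log_sum - row[target]

    loss_i2t = sum(nll(logits[k], k) for k in range(n)) / n
    cols = [[logits[i][j] for i in range(n)] for j in range(n)]
    loss_t2i = sum(nll(cols[k], k) for k in range(n)) / n
    return 0.5 * (loss_i2t + loss_t2i)
```

Running stage 1 first means that by the time contrastive tuning starts, the LLM embedder already lands near CLIP's text space, so the contrastive gradients need not tear apart the image encoder's pretrained alignment.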
Why it matters?
This research is important because it allows AI to better understand complex relationships between images and text, opening the door to more advanced applications like improved image search, more accurate image captioning, and AI systems that can handle a wider range of languages and detailed descriptions.
Abstract
The original CLIP text encoder is limited by a maximum input length of 77 tokens, which hampers its ability to effectively process long texts and perform fine-grained semantic understanding. In addition, the CLIP text encoder lacks support for multilingual inputs. All these limitations significantly restrict its applicability across a broader range of tasks. Recent studies have attempted to replace the CLIP text encoder with an LLM-based embedder to enhance its capabilities in long-text processing, multilingual understanding, and fine-grained semantic comprehension. However, because the representation spaces of LLMs and the vision-language space of CLIP are pretrained independently without alignment priors, direct alignment using contrastive learning can disrupt the intrinsic vision-language alignment in the CLIP image encoder, leading to an underutilization of the knowledge acquired during pre-training. To address this challenge, we propose ProCLIP, a curriculum learning-based progressive vision-language alignment framework to effectively align the CLIP image encoder with an LLM-based embedder. Specifically, ProCLIP first distills knowledge from CLIP's text encoder into the LLM-based embedder to leverage CLIP's rich pretrained knowledge while establishing initial alignment between the LLM embedder and the CLIP image encoder. Subsequently, ProCLIP further aligns the CLIP image encoder with the LLM-based embedder through image-text contrastive tuning, employing self-distillation regularization to avoid overfitting. To achieve a more effective alignment, instance semantic alignment loss and embedding structure alignment loss are employed during representation inheritance and contrastive tuning. The code is available at https://github.com/VisionXLab/ProCLIP
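The two auxiliary losses named in the abstract, instance semantic alignment and embedding structure alignment, can be sketched as follows. This is a toy pure-Python illustration under assumed definitions (per-instance cosine matching, and matching of pairwise-similarity matrices); the names and exact formulations here are placeholders, not the paper's code:

```python
import math

def _cos(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def instance_semantic_alignment(student, teacher):
    """Per-instance loss: each student embedding should point the same
    way as its teacher counterpart (1 - cosine, averaged over batch)."""
    pairs = zip(student, teacher)
    return sum(1.0 - _cos(s, t) for s, t in pairs) / len(student)

def embedding_structure_alignment(student, teacher):
    """Structure loss: the batch's pairwise-similarity matrix under the
    student should match the teacher's (mean squared difference)."""
    n = len(student)
    total = 0.0
    for i in range(n):
        for j in range(n):
            d = _cos(student[i], student[j]) - _cos(teacher[i], teacher[j])
            total += d * d
    return total / (n * n)
```

Note the two losses see different things: a batch of embeddings rotated as a whole has zero structure loss (all pairwise similarities are preserved) but a large instance loss, which is why combining both gives a stronger alignment signal than either alone.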