Switch-KD: Visual-Switch Knowledge Distillation for Vision-Language Models

Haoyi Sun, Xiaoxiao Wang, Ning Mao, Qian Wang, Lifu Mu, Wen Zheng, Tao Wei, Wei Chen

2026-04-17

Summary

This paper focuses on making large vision-language models, which are good at understanding both images and text, smaller and more efficient without losing their ability to perform well.

What's the problem?

Large vision-language models are powerful, but their size makes them difficult to use on devices with limited resources, like phones or embedded systems. A common technique to shrink models, called knowledge distillation, struggles with these models because it usually treats the image and text parts separately, failing to fully transfer the combined understanding the original model has.

What's the solution?

The researchers developed a new method called Switch-KD. It teaches the smaller 'student' model to think like the larger 'teacher' model by routing the student's visual outputs through the teacher's language pathway, so that both models express their image understanding in the same text-probability space. It also introduces a Dynamic Bi-directional Logits Difference (DBiLD) loss, which focuses on aligning the most informative parts of the student's and teacher's predictions while keeping the overall structure of those prediction distributions consistent. Together, these two components enable a more complete and accurate transfer of multimodal knowledge.
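To make the DBiLD idea concrete, here is a minimal sketch of a bidirectional logits-difference loss. The exact formulation in the paper is not given in this summary, so the top-k masking, temperature, and weighting below are illustrative assumptions: a forward KL term aligns the teacher's most confident ("informative") classes, and a reverse KL term preserves the student's overall distributional structure.

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over a list of logits."""
    z = [v / T for v in logits]
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

def dbild_like_loss(student_logits, teacher_logits, k=3, T=2.0, alpha=0.5):
    """Hypothetical sketch of a bidirectional logits-difference loss.

    Forward KL (teacher -> student) is restricted to the teacher's k most
    confident classes, aligning the informative probability region; reverse
    KL (student -> teacher) over the full distribution preserves structure.
    The real DBiLD loss may weight or select regions differently.
    """
    p = softmax(teacher_logits, T)  # teacher distribution
    q = softmax(student_logits, T)  # student distribution

    # Indices of the teacher's k most confident classes.
    topk = sorted(range(len(p)), key=lambda i: p[i], reverse=True)[:k]

    eps = 1e-8
    forward = sum(p[i] * math.log((p[i] + eps) / (q[i] + eps)) for i in topk)
    reverse = sum(qi * math.log((qi + eps) / (pi + eps)) for pi, qi in zip(p, q))
    return alpha * forward + (1 - alpha) * reverse
```

When student and teacher logits match, both terms vanish, so the loss is zero; any disagreement in the informative region or in overall shape is penalized.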

Why it matters?

This work is important because it allows for the creation of smaller, more practical vision-language models that can be deployed in more places. The authors show that a 0.5B-parameter TinyLLaVA student, distilled from a 3B teacher, improves by an average of 3.6 points across 10 multimodal benchmarks without any changes to its basic design, opening the door for wider use of these powerful AI systems.

Abstract

Vision-Language Models (VLMs) have shown remarkable capabilities in joint vision-language understanding, but their large scale poses significant challenges for deployment in resource-constrained scenarios. Knowledge Distillation (KD) offers a viable way to improve model capabilities without increasing model size or data requirements, making deployment more efficient. However, applying KD to VLMs is challenged by modality-specific supervision: although multimodal knowledge in VLMs is fused within the language space, current methods supervise each modality separately without explicitly addressing multimodal alignment, leading to inconsistent multimodal knowledge transfer. To address this, we propose Switch-KD, a visual-switch distillation framework that unifies vision-language knowledge transfer within a shared text-probability space. Switch-KD comprises two key components: (1) Visual-Switch Distillation, which switches the student's visual outputs into the teacher's language pathway to construct cross-modal probabilistic references for implicit visual knowledge transfer; and (2) Dynamic Bi-directional Logits Difference (DBiLD) loss, which adaptively aligns informative probability regions while preserving the distributional structures of teacher and student through bidirectional supervision. Guided by Switch-KD, a 0.5B TinyLLaVA effectively distills rich multimodal knowledge from its 3B teacher, yielding an average improvement of 3.6 points across 10 multimodal benchmarks without any architectural modification.
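The visual-switch step in component (1) can be pictured as routing the student's visual outputs through the teacher's language pathway to obtain a probability reference in the shared text space. The toy code below illustrates only this routing idea; the projector, the teacher LM head, and all dimensions are invented stand-ins, not the paper's actual modules.

```python
import math
import random

random.seed(0)

def linear(x, W):
    # Minimal matrix-vector product standing in for a model pathway.
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

# Toy stand-ins: a projector maps student visual features (dim 3) into the
# teacher's hidden space (dim 4); the teacher's LM head maps that hidden
# state to vocabulary logits (dim 5). All weights are random placeholders.
student_visual_features = [0.2, -0.5, 0.9]
projector_W = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
teacher_lm_head_W = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(5)]

# The "switch": the student's visual output enters the teacher's language
# pathway, yielding a cross-modal probability reference in text space that
# the student can then be supervised against.
switched_hidden = linear(student_visual_features, projector_W)
reference_probs = softmax(linear(switched_hidden, teacher_lm_head_W))
```

Because the reference lives in the teacher's text-probability space, both modalities can be supervised with the same distillation objective, which is the unification the abstract describes.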