From Generalist to Specialist: Adapting Vision Language Models via Task-Specific Visual Instruction Tuning
Yang Bai, Yang Zhou, Jun Zhou, Rick Siow Mong Goh, Daniel Shu Wei Ting, Yong Liu
2024-10-14

Summary
This paper introduces VITask, a new framework designed to help large vision language models (VLMs) perform better on specific tasks by integrating task-specific models that guide their learning.
What's the problem?
Large vision language models can do many things, but they often struggle with specialized tasks because they are trained on general-purpose data. When these models are fine-tuned for a particular application, the gap between their pre-training domain and the target domain leaves them with knowledge gaps that limit their effectiveness.
What's the solution?
VITask improves the adaptability of VLMs through three main strategies: exemplar prompting (EP), which feeds features from a task-specific model into the VLM as exemplars that guide its responses; response distribution alignment (RDA), which trains the plain VLM to match the response distribution of its exemplar-prompted counterpart so the task-specific model is not needed at inference time; and contrastive response tuning (CRT), which teaches the model to rank correct image-response pairs above incorrect ones. Together, these strategies sharpen the model's response distribution for specific tasks, such as medical diagnosis from images, as sketched in the examples below.
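To make exemplar prompting concrete, here is a minimal sketch of how features from a task-specific model (TSM) might be injected to guide a VLM's answer. The interfaces (tsm.encode, vlm.generate, extra_visual_tokens, and the prompt template) are illustrative assumptions, not the paper's actual API.

```python
import torch

def exemplar_prompted_answer(image, question, tsm, vlm):
    """Hypothetical sketch of exemplar prompting (EP).

    The TSM first encodes the image; its features act as task-specific
    "exemplars" that condition the VLM's response distribution.
    """
    with torch.no_grad():
        tsm_features = tsm.encode(image)  # assumed TSM feature extractor
    # Assumption: the VLM accepts extra visual tokens alongside the image.
    return vlm.generate(
        image=image,
        extra_visual_tokens=tsm_features,
        prompt=f"Question: {question}\nAnswer:",
    )
```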
Why it matters?
This research is significant because it shows how to make VLMs more effective for specialized applications while still maintaining their general capabilities. By improving task-specific performance, VITask can lead to better outcomes in fields like healthcare, where accurate image analysis is crucial.
Abstract
Large vision language models (VLMs) combine large language models with vision encoders, demonstrating promise across various tasks. However, they often underperform in task-specific applications due to domain gaps between pre-training and fine-tuning. We introduce VITask, a novel framework that enhances task-specific adaptability of VLMs by integrating task-specific models (TSMs). VITask employs three key strategies: exemplar prompting (EP), response distribution alignment (RDA), and contrastive response tuning (CRT) to improve the task-specific performance of VLMs by adjusting their response distributions. EP allows TSM features to guide VLMs, while RDA enables VLMs to adapt without TSMs during inference by learning from exemplar-prompted models. CRT further optimizes the ranking of correct image-response pairs, thereby reducing the risk of generating undesired responses. Experiments on 12 medical diagnosis datasets across 9 imaging modalities show that VITask outperforms both vanilla instruction-tuned VLMs and TSMs, showcasing its ability to integrate complementary features from both models effectively. Additionally, VITask offers practical advantages such as flexible TSM integration and robustness to incomplete instructions, making it a versatile and efficient solution for task-specific VLM tuning. Our code is available at https://github.com/baiyang4/VITask.
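As a rough illustration of how the RDA and CRT objectives could be combined, the sketch below uses a KL-divergence term to align the plain VLM with its exemplar-prompted counterpart and a margin loss to rank the correct image-response pair above a mismatched one. The exact loss forms, weighting, and tensor shapes are assumptions for illustration, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def vitask_style_losses(logits_plain, logits_ep, ll_correct, ll_wrong,
                        margin: float = 1.0):
    """Illustrative RDA + CRT objectives (assumed forms).

    logits_plain : token logits from the VLM without TSM exemplars
    logits_ep    : token logits from the exemplar-prompted VLM
    ll_correct   : log-likelihood of the correct response given the image
    ll_wrong     : log-likelihood of a mismatched (negative) response
    """
    # RDA: pull the plain VLM's response distribution toward the
    # exemplar-prompted distribution, so TSMs are not needed at inference.
    rda = F.kl_div(
        F.log_softmax(logits_plain, dim=-1),
        F.softmax(logits_ep.detach(), dim=-1),
        reduction="batchmean",
    )

    # CRT: rank the correct image-response pair above an incorrect one
    # by at least the given margin.
    crt = F.relu(margin - (ll_correct - ll_wrong)).mean()

    return rda, crt
```

Under this reading, only the plain VLM is used at inference; the RDA term is what lets it retain the benefit of TSM guidance without querying the TSM at test time.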