Large Multimodal Models as General In-Context Classifiers
Marco Garosi, Matteo Farina, Alessandro Conti, Massimiliano Mancini, Elisa Ricci
2026-03-06
Summary
This paper investigates which type of artificial intelligence model – specifically, those that can understand both images and text – is best suited for classification. It challenges the common belief that CLIP-like models are superior for simple classification tasks, arguing that newer, more complex Large Multimodal Models (LMMs) have untapped potential.
What's the problem?
Currently, it's generally thought that CLIP-like models are best for classifying images with little or no training data, while LMMs are better suited for more complicated tasks. However, this paper points out that one LMM capability hasn't been fully explored: their ability to learn *from examples* provided right in the prompt, a skill called 'in-context learning'. Although LMMs lag behind CLIP models in zero-shot settings, a few in-context examples can close the gap – especially in open-world classification, where the set of possible labels isn't fixed in advance. The catch is that LMMs struggle whenever the examples given to them are imperfect or unclear.
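In-context learning here means prepending a handful of labeled demonstrations to the query. The sketch below shows how such a few-shot classification prompt might be assembled; the image placeholder tokens, function name, and format are purely illustrative, not any specific model's API.

```python
# Hypothetical sketch of assembling an in-context classification prompt
# for an LMM. The <img_*> tokens stand in for interleaved image inputs;
# real models each have their own chat/template format.

def build_icl_prompt(examples, class_names, query_token="<query_image>"):
    """Interleave demonstration images with their labels, then ask the
    model to classify a new image among the given classes."""
    lines = [f"Classify the image as one of: {', '.join(class_names)}."]
    for i, (image_token, label) in enumerate(examples, start=1):
        lines.append(f"Example {i}: {image_token} -> {label}")
    lines.append(f"Now classify: {query_token} ->")
    return "\n".join(lines)

prompt = build_icl_prompt(
    [("<img_1>", "cat"), ("<img_2>", "dog")],
    class_names=["cat", "dog"],
)
print(prompt)
```

The demonstrations play the same role that the training set plays for a fine-tuned classifier, but are consumed at inference time with no parameter updates.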
What's the solution?
The researchers benchmarked several state-of-the-art LMMs on a range of classification tasks. They found that with just a few good examples, LMMs could perform as well as, or even better than, CLIP-like models. To handle the open-world setting, where the in-context examples lack reliable labels, they developed a method called CIRCLE. CIRCLE automatically assigns pseudo-labels to the examples and then iteratively refines those labels using the context itself, helping the LMM make better use of the examples without any extra training.
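The summary doesn't give CIRCLE's exact procedure, but its two ingredients – pseudo-labeling the in-context examples and refining those labels with the context itself – can be illustrated on toy embedding vectors. Everything below is an assumption for illustration (nearest-prototype pseudo-labeling, then EM-style centroid updates and relabeling), not the paper's actual algorithm.

```python
import numpy as np

# Toy stand-in for CIRCLE's two steps. ASSUMPTIONS: examples and class
# names are 2-D embedding vectors; pseudo-labels come from nearest-
# prototype matching; refinement re-estimates class centroids from the
# context and reassigns labels. This is a sketch, not the paper's method.

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def pseudo_label(examples, prototypes):
    """Step 1: assign each unlabeled in-context example its nearest class."""
    return [max(prototypes, key=lambda c: cosine(x, prototypes[c]))
            for x in examples]

def refine(examples, labels, prototypes, rounds=3):
    """Step 2: mix each class prototype with the mean of the examples
    currently assigned to it, then relabel every example; repeat."""
    for _ in range(rounds):
        centroids = {}
        for c, proto in prototypes.items():
            members = [x for x, l in zip(examples, labels) if l == c]
            centroids[c] = proto if not members else (proto + np.mean(members, axis=0)) / 2
        labels = [max(centroids, key=lambda c: cosine(x, centroids[c]))
                  for x in examples]
    return labels

prototypes = {"cat": np.array([1.0, 0.0]), "dog": np.array([0.0, 1.0])}
examples = [np.array(v) for v in
            [[0.9, 0.1], [0.8, 0.2], [0.85, 0.15],   # cat-like
             [0.1, 0.9], [0.2, 0.8], [0.15, 0.85]]]  # dog-like

labels = pseudo_label(examples, prototypes)
labels[0] = "dog"                      # simulate one noisy pseudo-label
refined = refine(examples, labels, prototypes)
print(refined)                         # the noisy label gets corrected
```

The point of the toy run is the training-free character of the idea: the context alone, aggregated into per-class statistics, is enough to repair a noisy label without touching any model weights.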
Why it matters?
This research is important because it shows that LMMs aren't just for complex tasks; they can be a powerful and flexible alternative for standard classification problems too. By highlighting the potential of in-context learning and introducing CIRCLE, the paper suggests that LMMs could become a single, unified model capable of handling a wide range of classification tasks, reducing the need for specialized models and making AI more adaptable.
Abstract
Which multimodal model should we use for classification? Previous studies suggest that the answer lies in CLIP-like contrastive Vision-Language Models (VLMs), due to their remarkable performance in zero-shot classification. In contrast, Large Multimodal Models (LMMs) are more suitable for complex tasks. In this work, we argue that this answer overlooks an important capability of LMMs: in-context learning. We benchmark state-of-the-art LMMs on diverse datasets for closed-world classification and find that, although their zero-shot performance is lower than CLIP's, LMMs with a few in-context examples can match or even surpass contrastive VLMs with cache-based adapters, their "in-context" equivalent. We extend this analysis to the open-world setting, where the generative nature of LMMs makes them more suitable for the task. In this challenging scenario, LMMs struggle whenever provided with imperfect context information. To address this issue, we propose CIRCLE, a simple training-free method that assigns pseudo-labels to in-context examples, iteratively refining them with the available context itself. Through extensive experiments, we show that CIRCLE establishes a robust baseline for open-world classification, surpassing VLM counterparts and highlighting the potential of LMMs to serve as unified classifiers, and a flexible alternative to specialized models.
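The abstract's baseline of "contrastive VLMs with cache-based adapters" refers to methods that store few-shot features in a key-value cache and blend similarity-weighted votes with the zero-shot classifier (Tip-Adapter is the best-known example). A minimal numpy sketch of that idea, using random features and illustrative alpha/beta values rather than real CLIP embeddings:

```python
import numpy as np

# Minimal sketch of a cache-based adapter for a CLIP-style model, the
# "in-context" counterpart mentioned in the abstract. All features here
# are random stand-ins; alpha and beta are illustrative hyperparameters.

rng = np.random.default_rng(0)
num_classes, shots, dim = 3, 4, 8

def l2norm(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Cache: features of the few-shot examples (keys) and one-hot labels (values).
cache_keys = l2norm(rng.normal(size=(num_classes * shots, dim)))
cache_vals = np.eye(num_classes).repeat(shots, axis=0)

# Zero-shot classifier: one normalized text embedding per class.
text_weights = l2norm(rng.normal(size=(num_classes, dim)))

def classify(image_feat, alpha=1.0, beta=5.0):
    """Blend zero-shot logits with similarity-weighted votes from the cache."""
    f = l2norm(image_feat)
    zero_shot = f @ text_weights.T
    affinity = np.exp(-beta * (1.0 - f @ cache_keys.T))  # cache similarities
    cache_logits = affinity @ cache_vals                  # weighted label votes
    return zero_shot + alpha * cache_logits

query = rng.normal(size=dim)
logits = classify(query)
print(int(np.argmax(logits)))
```

Like LMM in-context learning, this adapts the classifier with a few labeled examples and no gradient updates; the paper's comparison treats the two as the fair few-shot counterparts of each other.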