MLLMs Know Where to Look: Training-free Perception of Small Visual Details with Multimodal LLMs

Jiarui Zhang, Mahyar Khayatkhoei, Prateek Chhikara, Filip Ilievski

2025-02-26


Summary

This paper studies whether multimodal large language models (MLLMs) can see small details in images as well as large ones. It finds that they often look at the right place even when they answer incorrectly, and it proposes training-free methods that use the model's own attention and gradient maps to improve its perception of small visual details.

What's the problem?

MLLMs are generally good at answering questions about images, but their accuracy drops sharply when the subject of the question is small in the image. The authors show through an intervention study that this effect is causal: shrinking the visual subject directly hurts performance. Since these models may end up in critical applications, it is important to understand and fix this blind spot.

What's the solution?

The researchers first examine the attention patterns of MLLMs while they answer visual questions and find that the models consistently know where to look, even when their answers are wrong. Building on this, they propose training-free visual intervention methods that use the model's own internal signals, its attention and gradient maps, to zoom in on the relevant region and sharpen its perception of small details. They test these methods on two widely used MLLMs across seven visual question answering benchmarks and show significant accuracy gains without any training.

Why it matters?

This matters because it exposes a concrete risk in using MLLMs for recognition tasks that involve small details, such as reading fine print or spotting tiny objects. It also shows that the problem can be reduced cheaply: since the fixes need no retraining and rely only on the model's internal state, they can be applied to existing models right away.

Abstract

Multimodal Large Language Models (MLLMs) have experienced rapid progress in visual recognition tasks in recent years. Given their potential integration into many critical applications, it is important to understand the limitations of their visual perception. In this work, we study whether MLLMs can perceive small visual details as effectively as large ones when answering questions about images. We observe that their performance is very sensitive to the size of the visual subject of the question, and further show that this effect is in fact causal by conducting an intervention study. Next, we study the attention patterns of MLLMs when answering visual questions, and intriguingly find that they consistently know where to look, even when they provide the wrong answer. Based on these findings, we then propose training-free visual intervention methods that leverage the internal knowledge of any MLLM itself, in the form of attention and gradient maps, to enhance its perception of small visual details. We evaluate our proposed methods on two widely-used MLLMs and seven visual question answering benchmarks and show that they can significantly improve MLLMs' accuracy without requiring any training. Our results elucidate the risk of applying MLLMs to visual recognition tasks concerning small details and indicate that visual intervention using the model's internal state is a promising direction to mitigate this risk.
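To make the idea of attention-guided visual intervention concrete, here is a minimal sketch, not the paper's exact algorithm: given a patch-level attention map from the model, upsample it to pixel resolution, threshold it relative to its peak, and crop the image to the high-attention region before re-asking the question. The function name, the nearest-neighbour upsampling, and the `tau` threshold are all illustrative assumptions.

```python
import numpy as np

def attention_crop(image, attn, tau=0.5):
    """Crop `image` (H, W, C) around the region where the model's
    patch-level attention map `attn` (h, w) is highest.

    Hypothetical sketch: upsample the attention to pixel resolution,
    keep pixels whose attention is at least `tau` times the maximum,
    and return the bounding box of that region.
    """
    H, W = image.shape[:2]
    h, w = attn.shape
    # Nearest-neighbour upsample of the patch attention to the pixel grid.
    ys = (np.arange(H) * h // H).clip(0, h - 1)
    xs = (np.arange(W) * w // W).clip(0, w - 1)
    dense = attn[np.ix_(ys, xs)]
    # Threshold relative to the attention peak.
    mask = dense >= dense.max() * tau
    rows = np.where(mask.any(axis=1))[0]
    cols = np.where(mask.any(axis=0))[0]
    y0, y1 = rows[0], rows[-1] + 1
    x0, x1 = cols[0], cols[-1] + 1
    return image[y0:y1, x0:x1]

# Example: a 32x32 image with a 4x4 attention map peaking at patch (1, 2)
# yields a crop covering just that patch's 8x8 pixel footprint.
img = np.zeros((32, 32, 3))
attn = np.zeros((4, 4))
attn[1, 2] = 1.0
crop = attention_crop(img, attn)
```

In the actual pipeline, the cropped (or upscaled) region would be fed back to the MLLM so that the small subject occupies more of its visual input; the paper's methods also draw on gradient maps, which this sketch omits.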