Tinted Frames: Question Framing Blinds Vision-Language Models
Wan-Cyuan Fan, Jiayun Luo, Declan Kutscher, Leonid Sigal, Ritwik Gupta
2026-03-20
Summary
This paper investigates a surprising flaw in Vision-Language Models (VLMs), which are AI systems designed to understand both images and text. It finds that these models don't always 'look' at the images as much as they should, even when the task requires it, and that *how* a question is asked dramatically changes how much attention the model pays to the visual information.
What's the problem?
VLMs are often surprisingly bad at using the visual information they're given, even when it's crucial for answering a question. The researchers show this isn't a general failure to see but a 'selective blindness': the models change how much they focus on the image based on how the question is worded. For example, with a multiple-choice question the model pays less attention to the image overall and focuses on less relevant regions than it does with an open-ended question that asks it to describe what it sees. This leads to inaccurate answers and inconsistent results depending on the question format.
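The "attention to the image" measured here can be made concrete with a small probe. Below is a minimal sketch, assuming access to one layer's attention weights and a boolean mask marking which key positions correspond to image tokens; the function name and indexing conventions are illustrative assumptions, not the paper's actual code.

```python
# Rough sketch of using attention as a probe: measure what fraction of
# attention mass lands on image tokens, then compare this value for the
# same visual question posed as open-ended vs. multiple-choice.
import torch

def image_attention_mass(attn: torch.Tensor, image_token_mask: torch.Tensor) -> float:
    """
    attn: (num_heads, query_len, key_len) attention weights from one layer.
    image_token_mask: (key_len,) boolean mask marking image-token positions.
    Returns the fraction of total attention placed on image tokens.
    """
    total = attn.sum()
    on_image = attn[..., image_token_mask].sum()  # select image-token columns
    return (on_image / total).item()
```

Per the paper's finding, this fraction should come out substantially lower for constrained framings (multiple choice, yes/no) than for open-ended ones, even when the underlying visual reasoning is identical.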
What's the solution?
To fix this, the researchers use a lightweight prompt-tuning method. They prepend a few 'learnable tokens' to the question, which trains the model to pay more consistent attention to the relevant parts of the image regardless of how the question is asked. This encourages the model to use the visual information in a more reliable, grounded way, similar to how it behaves with open-ended questions.
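Below is a minimal sketch of what prompt-tuning with learnable tokens can look like in PyTorch; the class name, dimensions, and the embed_text helper mentioned in the usage comment are illustrative assumptions, not the authors' implementation.

```python
# Minimal prompt-tuning sketch: the VLM backbone stays frozen and only a
# small set of "soft" prompt embeddings, prepended to the question, is trained.
import torch
import torch.nn as nn

class LearnablePromptPrefix(nn.Module):
    """Prepends a few trainable embeddings to the question token embeddings."""

    def __init__(self, num_prompt_tokens: int, embed_dim: int):
        super().__init__()
        # The only trainable parameters in this setup.
        self.prompt = nn.Parameter(torch.randn(num_prompt_tokens, embed_dim) * 0.02)

    def forward(self, question_embeds: torch.Tensor) -> torch.Tensor:
        # question_embeds: (batch, seq_len, embed_dim) from the frozen text embedder
        batch = question_embeds.size(0)
        prefix = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        # Learnable tokens go in front of the question, independent of its framing.
        return torch.cat([prefix, question_embeds], dim=1)

# Usage sketch (hypothetical helper names):
# prefix = LearnablePromptPrefix(num_prompt_tokens=8, embed_dim=4096)
# inputs = prefix(vlm.embed_text(question_ids))  # then fed to the frozen VLM
```

The idea is that, because only these few prefix embeddings are optimized, the method is cheap to train while steering the frozen model toward the more visually grounded attention patterns seen with open-ended questions.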
Why it matters?
This research is important because it reveals a fundamental weakness in current VLMs. If these models can be easily tricked into ignoring important visual details simply by changing the wording of a question, it limits their reliability and usefulness in real-world applications like image captioning, visual question answering, and robotics. The proposed solution offers a practical way to improve the visual grounding of these models and make them more robust and accurate.
Abstract
Vision-Language Models (VLMs) have been shown to be blind, often underutilizing their visual inputs even on tasks that require visual reasoning. In this work, we demonstrate that VLMs are selectively blind. They modulate the amount of attention applied to visual inputs based on linguistic framing, even when alternative framings demand identical visual reasoning. Using visual attention as a probe, we quantify how framing alters both the amount and distribution of attention over the image. Constrained framings, such as multiple choice and yes/no, induce substantially lower attention to image context than open-ended framings, reduce focus on task-relevant regions, and shift attention towards uninformative tokens. We further demonstrate that this attention misallocation is the principal cause of degraded accuracy and cross-framing inconsistency. Building on this mechanistic insight, we introduce a lightweight prompt-tuning method using learnable tokens that encourages the robust, visually grounded attention patterns observed in open-ended settings, improving visual grounding and performance across framings.