Chain-of-Visual-Thought: Teaching VLMs to See and Think Better with Continuous Visual Tokens
Yiming Qin, Bomin Wei, Jiaxin Ge, Konstantinos Kallidromitis, Stephanie Fu, Trevor Darrell, Xudong Wang
2025-11-25
Summary
This paper introduces a new way to help Vision-Language Models, which are already good at understanding and generating text about images, become better at actually *seeing* and understanding the fine-grained visual details within those images.
What's the problem?
Current Vision-Language Models are really good at reasoning about images in words, but they struggle with tasks that require detailed visual understanding, like figuring out the 3D shape of objects or how things are positioned in space. This is because they lack a good mechanism for capturing dense visual information across the spatial dimensions of an image.
What's the solution?
The researchers developed a framework called Chain-of-Visual-Thought, or COVT. This system lets the model 'think' using continuous visual tokens: compact codes that capture perceptual cues such as shape, depth, spatial layout, and edges. During training, the model learns to predict these visual tokens by reconstructing dense signals from the image, like depth maps, segmentation, and outlines, distilling knowledge from lightweight vision experts. Then, when answering questions about an image, it reasons directly in this visual token space, making it more accurate and efficient.
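The training idea described above can be sketched in miniature. Everything below (the pooling step, the toy decoders, the targets, and the loss weighting) is a hypothetical stand-in for the paper's learned components, not the actual COVT implementation; it only illustrates the shape of the recipe: compress dense features into a small visual-token budget, reconstruct expert supervision from those tokens, and add the reconstruction losses to the language loss.

```python
import random

random.seed(0)

# Hypothetical sizes: ~196 image patches compressed into a budget of
# roughly 20 continuous visual tokens, as the paper describes.
N_PATCHES, DIM, N_VISUAL_TOKENS = 196, 8, 20

def mean_pool_groups(patches, k):
    """Compress N patch vectors into k visual tokens by group mean-pooling
    (a stand-in for the learned projection a real model would use)."""
    group = max(1, len(patches) // k)
    tokens = []
    for i in range(0, len(patches), group):
        chunk = patches[i:i + group]
        tokens.append([sum(v[d] for v in chunk) / len(chunk) for d in range(DIM)])
    return tokens[:k]

def mse(pred, target):
    """Mean squared error between two equal-length vectors."""
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

def toy_decoder(tokens):
    """Decode visual tokens into a dense prediction (here: one scalar per
    token), standing in for a lightweight depth/edge decoder head."""
    return [sum(t) / len(t) for t in tokens]

# Fake dense patch features and placeholder supervision targets that
# would come from vision experts (e.g. a depth or edge estimator).
patches = [[random.random() for _ in range(DIM)] for _ in range(N_PATCHES)]
visual_tokens = mean_pool_groups(patches, N_VISUAL_TOKENS)

depth_target = [0.5] * N_VISUAL_TOKENS  # placeholder expert depth signal
edge_target = [0.1] * N_VISUAL_TOKENS   # placeholder expert edge signal

loss_depth = mse(toy_decoder(visual_tokens), depth_target)
loss_edge = mse(toy_decoder(visual_tokens), edge_target)
loss_language = 1.0                     # placeholder next-token loss

# Total training objective: language loss plus dense reconstruction losses.
total_loss = loss_language + loss_depth + loss_edge
print(len(visual_tokens), round(total_loss, 3))
```

At inference the decoder heads are optional: the model can reason in the compact token space alone, decoding dense maps only when interpretability is wanted.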
Why it matters?
This work is important because it significantly improves the ability of these models to understand images in a more detailed and accurate way. By allowing them to reason visually, rather than just relying on words, it makes them better at tasks requiring spatial reasoning and geometric awareness, leading to more reliable and interpretable results across a wide range of image understanding challenges.
Abstract
Vision-Language Models (VLMs) excel at reasoning in linguistic space but struggle with perceptual understanding that requires dense visual perception, e.g., spatial reasoning and geometric awareness. This limitation stems from the fact that current VLMs have limited mechanisms to capture dense visual information across spatial dimensions. We introduce Chain-of-Visual-Thought (COVT), a framework that enables VLMs to reason not only in words but also through continuous visual tokens: compact latent representations that encode rich perceptual cues. Within a small budget of roughly 20 tokens, COVT distills knowledge from lightweight vision experts, capturing complementary properties such as 2D appearance, 3D geometry, spatial layout, and edge structure. During training, the VLM with COVT autoregressively predicts these visual tokens to reconstruct dense supervision signals (e.g., depth, segmentation, edges, and DINO features). At inference, the model reasons directly in the continuous visual token space, preserving efficiency while optionally decoding dense predictions for interpretability. Evaluated across more than ten diverse perception benchmarks, including CV-Bench, MMVP, RealWorldQA, MMStar, WorldMedQA, and HRBench, integrating COVT into strong VLMs such as Qwen2.5-VL and LLaVA consistently improves performance by 3% to 16% and demonstrates that compact continuous visual thinking enables more precise, grounded, and interpretable multimodal intelligence.