
Lost in Embeddings: Information Loss in Vision-Language Models

Wenyan Li, Raphael Tang, Chengzu Li, Caiqi Zhang, Ivan Vulić, Anders Søgaard

2025-09-16


Summary

This paper investigates how much information is lost when vision-language models (VLMs) translate images into a format the language part of the model can understand.

What's the problem?

VLMs use a process where images are first analyzed by a 'vision encoder' and then converted into a language-like representation by a 'connector'. This connector is essential, but the researchers noticed that this conversion step might be throwing away important details from the original image, potentially hurting the model's performance. They wanted to figure out *how much* information is actually lost during this translation and *where* that loss happens within the image.

What's the solution?

The researchers used two complementary methods to measure this information loss. First, they checked whether each image keeps the same nearest neighbors before and after the connector translates it. If images that used to be each other's closest matches are no longer close after the translation, that distortion signals lost information. Second, they tried to reconstruct the original vision-encoder representation from the translated version. Where the reconstruction failed, the connector had discarded crucial details, and because the reconstruction works at the level of image patches, they could pinpoint *which parts* of the image were hardest to recover. Finally, they connected these measurements to how well the model answers questions about images, showing that areas of high information loss line up with the cases where the model struggles.
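To make the first measurement concrete, here is a minimal sketch (not the authors' code) of a k-nearest-neighbor overlap metric. The function names, toy dimensions, and the choice of cosine similarity are assumptions for illustration only: it compares each image's neighbors in the vision-encoder space against its neighbors after the connector, and a low overlap means the projection has scrambled the local geometry.

```python
import numpy as np

def knn_indices(embeddings: np.ndarray, k: int) -> np.ndarray:
    """Indices of each row's k nearest neighbors by cosine similarity."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T
    np.fill_diagonal(sims, -np.inf)  # exclude self-matches
    return np.argsort(-sims, axis=1)[:, :k]

def knn_overlap(pre_proj: np.ndarray, post_proj: np.ndarray, k: int = 10) -> float:
    """Average fraction of k-nearest neighbors preserved after the connector projection."""
    pre_nn = knn_indices(pre_proj, k)
    post_nn = knn_indices(post_proj, k)
    overlaps = [
        len(set(pre_nn[i]) & set(post_nn[i])) / k
        for i in range(pre_proj.shape[0])
    ]
    return float(np.mean(overlaps))

# Toy usage with random stand-ins for pooled image embeddings.
rng = np.random.default_rng(0)
pre = rng.normal(size=(100, 768))    # e.g. vision-encoder outputs
post = rng.normal(size=(100, 4096))  # e.g. post-connector embeddings
print(f"k-NN overlap: {knn_overlap(pre, post):.2f}")  # low overlap = distorted local geometry
```

An overlap near 1.0 would mean the connector preserves which images look similar to which; the paper reports that in practice 40-60% of neighbors change after projection.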

Why it matters?

Understanding this information loss is important because it helps us build better VLMs. If we know where the connector is failing to preserve image details, we can improve its design and create models that are more accurate and reliable, especially for tasks that require a deep understanding of visual content like answering questions about images.

Abstract

Vision-language models (VLMs) often process visual inputs through a pretrained vision encoder, followed by a projection into the language model's embedding space via a connector component. While crucial for modality fusion, the potential information loss induced by this projection step and its direct impact on model capabilities remain understudied. We introduce two complementary approaches to examine and quantify this loss by analyzing the latent representation space. First, we evaluate semantic information preservation by analyzing changes in k-nearest neighbor relationships between image representations, before and after projection. Second, we directly measure information loss by reconstructing visual embeddings from the projected representation, localizing loss at an image patch level. Experiments reveal that connectors substantially distort the local geometry of visual representations, with k-nearest neighbors diverging by 40-60% post-projection, correlating with degradation in retrieval performance. The patch-level embedding reconstruction provides interpretable insights for model behavior on visually grounded question-answering tasks, finding that areas of high information loss reliably predict instances where models struggle.
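For a concrete picture of the second approach, below is a rough sketch of a patch-level reconstruction probe. It is an illustration under assumptions the paper may not share: it presumes one post-connector token per image patch, uses a simple two-layer MLP as the probe, omits the probe's training loop, and all names and dimensions are made up. The idea is to map connector outputs back toward the vision encoder's patch embeddings and read per-patch reconstruction error as a map of where information was lost.

```python
import torch
import torch.nn as nn

# Hypothetical shapes; the real dimensions depend on the VLM being probed.
NUM_PATCHES, VISION_DIM, LLM_DIM = 576, 1024, 4096

class PatchReconstructor(nn.Module):
    """Small probe mapping post-connector embeddings back to vision-encoder patch embeddings."""
    def __init__(self, llm_dim: int, vision_dim: int, hidden: int = 2048):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(llm_dim, hidden),
            nn.GELU(),
            nn.Linear(hidden, vision_dim),
        )

    def forward(self, projected: torch.Tensor) -> torch.Tensor:
        return self.net(projected)

def patch_loss_map(probe: PatchReconstructor,
                   vision_patches: torch.Tensor,  # (num_patches, vision_dim)
                   projected: torch.Tensor        # (num_patches, llm_dim)
                   ) -> torch.Tensor:
    """Per-patch reconstruction error; high values flag patches where
    the connector likely discarded information."""
    with torch.no_grad():
        recon = probe(projected)
    return ((recon - vision_patches) ** 2).mean(dim=-1)  # shape: (num_patches,)

# Toy usage with random tensors standing in for real model outputs
# (in practice the probe would first be trained on held-out images).
probe = PatchReconstructor(LLM_DIM, VISION_DIM)
vision_patches = torch.randn(NUM_PATCHES, VISION_DIM)
projected = torch.randn(NUM_PATCHES, LLM_DIM)
loss_map = patch_loss_map(probe, vision_patches, projected)
print(loss_map.shape, loss_map.topk(5).indices)  # patches hardest to reconstruct
```

Reshaping such a loss map back onto the image grid is what lets this kind of analysis point at the specific regions a question-answering model is likely to get wrong.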