
The Hidden Life of Tokens: Reducing Hallucination of Large Vision-Language Models via Visual Information Steering

Zhuowei Li, Haizhou Shi, Yunhe Gao, Di Liu, Zhenting Wang, Yuxiao Chen, Ting Liu, Long Zhao, Hao Wang, Dimitris N. Metaxas

2025-02-11


Summary

This paper introduces VISTA, a new method to reduce hallucinations in AI systems that work with both images and text. VISTA helps these systems stick more closely to what they actually see in images when generating descriptions or answers.

What's the problem?

Large Vision-Language Models (LVLMs) are really good at understanding images and text together, but they often make up information that isn't actually in the image. This is called hallucination, and it's a big problem because it means we can't always trust what these AI systems tell us about images.

What's the solution?

The researchers studied how LVLMs process information internally and found three important patterns. Based on these findings, they created VISTA, which works during the AI's thinking process to keep it focused on the real visual information. VISTA does this by boosting the importance of image-related information and using earlier stages of the AI's thought process to guide its final output. Importantly, VISTA doesn't need any extra training or outside help to work.

Why it matters?

This matters because it makes AI systems that work with images and text more reliable and trustworthy. By reducing hallucinations by about 40%, VISTA could help these AI systems be used more confidently in real-world applications like helping visually impaired people, improving image search, or assisting in medical image analysis. It's a big step towards making sure AI tells us what's really in an image, not what it thinks might be there.

Abstract

Large Vision-Language Models (LVLMs) can reason effectively over both textual and visual inputs, but they tend to hallucinate syntactically coherent yet visually ungrounded contents. In this paper, we investigate the internal dynamics of hallucination by examining the token logit rankings throughout the generation process, revealing three key patterns in how LVLMs process information: (1) gradual visual information loss -- visually grounded tokens gradually become less favored throughout generation; (2) early excitation -- semantically meaningful tokens reach peak activation in layers earlier than the final layer; and (3) hidden genuine information -- visually grounded tokens, though not ultimately decoded, still retain relatively high rankings at inference. Based on these insights, we propose VISTA (Visual Information Steering with Token-logit Augmentation), a training-free inference-time intervention framework that reduces hallucination while promoting genuine information. VISTA works by combining two complementary approaches: reinforcing visual information in activation space and leveraging early-layer activations to promote semantically meaningful decoding. Compared to existing methods, VISTA requires no external supervision and is applicable to various decoding strategies. Extensive experiments show that VISTA on average reduces hallucination by about 40% on the evaluated open-ended generation task, and it consistently outperforms existing methods on four benchmarks across four architectures under three decoding strategies.
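The two interventions the abstract describes — reinforcing visual information and blending in early-layer activations at decoding time — can be sketched as a single decoding step. This is a minimal illustration, not the paper's actual formulation: the function name, the `visual_boost` input (a per-token score favoring visually grounded tokens), and the mixing weights `alpha` and `lam` are all assumptions made for the example.

```python
import numpy as np

def vista_decode_step(final_logits, early_logits, visual_boost,
                      lam=0.5, alpha=0.3):
    """One decoding step of a VISTA-style intervention (illustrative sketch).

    final_logits: last-layer token logits from the model
    early_logits: logits read out from an earlier layer's hidden state
    visual_boost: hypothetical per-token scores favoring visually
                  grounded tokens
    lam, alpha:   illustrative mixing weights, not the paper's values
    """
    # (a) Reinforce visual information: push up visually grounded tokens,
    # counteracting the gradual visual information loss during generation.
    steered = final_logits + alpha * visual_boost

    # (b) Early excitation: blend in early-layer logits, where semantically
    # meaningful tokens peak before the final layer.
    fused = (1 - lam) * steered + lam * early_logits

    # Greedy pick shown here; the fused logits could equally feed sampling
    # or beam search, matching the claim of decoding-strategy agnosticism.
    return int(np.argmax(fused))
```

With `alpha = lam = 0` this reduces to ordinary greedy decoding on the final-layer logits; raising either weight shifts the chosen token toward visually grounded or early-excited candidates.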