Fixing Imbalanced Attention to Mitigate In-Context Hallucination of Large Vision-Language Model
Kazi Hasan Ibn Arif, Sajib Acharjee Dip, Khizar Hussain, Lang Zhang, Chris Thomas
2025-01-22
Summary
This paper presents a new way to make AI systems that understand both images and text, called Large Vision-Language Models (LVLMs), more accurate when describing what they see in pictures. The researchers found a way to reduce 'hallucinations', which happen when these AI systems make up things that aren't actually in the image.
What's the problem?
LVLMs are really good at understanding and describing images, but they often make mistakes by talking about objects or details that aren't actually in the picture. This happens because as the AI processes the image through its layers, it starts to lose focus on what's actually in the image and begins to make things up based on what it expects to see.
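This "losing focus" can be made concrete by tracking how much attention the model pays to image tokens at each layer. The sketch below is an illustrative diagnostic, not code from the paper: the function name and data layout are assumptions.

```python
import numpy as np

def visual_grounding_per_layer(layer_attns, visual_idx):
    """For each layer, compute the average share of attention that
    generated-text queries place on the visual tokens. A curve that
    falls with depth would indicate the grounding decay described
    above. layer_attns: list of (queries, keys) attention matrices;
    visual_idx: key positions belonging to image tokens."""
    shares = []
    for attn in layer_attns:
        # Normalize each query's attention row to sum to 1.
        row = attn / attn.sum(axis=-1, keepdims=True)
        # Fraction of each row's mass landing on visual tokens,
        # averaged over queries.
        shares.append(row[:, visual_idx].sum(axis=-1).mean())
    return shares
```

Running this over a model's layers would give one number per layer; hallucination-prone generations would show the share shrinking in deeper layers.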
What's the solution?
The researchers came up with a clever fix that doesn't require completely retraining the AI. They created a method that helps the AI pay better attention to the important parts of the image throughout the whole process. They did this in two main ways: First, they made a system that picks out the most important visual details in the image. Second, they adjusted how different parts of the AI pay attention to these details, based on how good each part is at understanding visual information. This helps the AI stay focused on what's actually in the image instead of making things up.
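The first step, picking out important visual details, can be sketched as a dual-stream selection: one stream keeps tokens that already receive a lot of attention, the other keeps tokens spread across the image so no region is dropped, and the selected tokens then get a boost. Everything here, the function names, the even-stride spatial heuristic, and the boost factor, is an illustrative assumption rather than the paper's exact algorithm.

```python
import numpy as np

def select_visual_tokens(attn_to_visual, k_local=4, k_spatial=4):
    """Toy dual-stream selection. attn_to_visual: per-token attention
    mass received by each visual token (1-D array)."""
    n = attn_to_visual.shape[0]
    # Stream 1: locally informative tokens (top-k by received attention).
    local = set(np.argsort(attn_to_visual)[::-1][:k_local].tolist())
    # Stream 2: spatially significant tokens, sampled evenly over the
    # grid so every image region keeps a representative.
    stride = max(1, n // k_spatial)
    spatial = set(range(0, n, stride))
    return sorted(local | spatial)

def emphasize_tokens(attn_row, selected, boost=1.5):
    """Scale attention on the selected tokens, then renormalize so
    the row is still a probability distribution."""
    out = attn_row.copy()
    out[selected] *= boost
    return out / out.sum()
```

After `emphasize_tokens`, the selected tokens hold a larger share of the attention mass while the row still sums to 1, which is the mechanism that keeps the model anchored on real image content.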
Why it matters?
This matters because it makes AI systems that work with images and text much more reliable. By reducing hallucinations by up to 62.3%, it means these AIs are much less likely to describe things that aren't there. This is really important for using these systems in real-world applications where accuracy is crucial, like helping visually impaired people understand their surroundings or in automated image captioning for social media. It's a big step towards making AI we can trust to accurately describe what it sees, without needing to completely rebuild or retrain these complex systems.
Abstract
Large Vision Language Models (LVLMs) have demonstrated remarkable capabilities in understanding and describing visual content, achieving state-of-the-art performance across various vision-language tasks. However, these models frequently exhibit hallucination behavior, where they generate descriptions containing objects or details absent in the input image. Our work investigates this phenomenon by analyzing attention patterns across transformer layers and heads, revealing that hallucinations often stem from progressive degradation of visual grounding in deeper layers. We propose a novel attention modification approach that combines selective token emphasis and head-specific modulation to maintain visual grounding throughout the generation process. Our method introduces two key components: (1) a dual-stream token selection mechanism that identifies and prioritizes both locally informative and spatially significant visual tokens, and (2) an attention head-specific modulation strategy that differentially amplifies visual information processing based on measured visual sensitivity of individual attention heads. Through extensive experimentation on the MSCOCO dataset, we demonstrate that our approach reduces hallucination rates by up to 62.3% compared to baseline models while maintaining comparable task performance. Our analysis reveals that selectively modulating tokens across attention heads with varying levels of visual sensitivity can significantly improve visual grounding without requiring model retraining.
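The abstract's second component, head-specific modulation, can be sketched as: measure each head's "visual sensitivity" (how much attention mass it already places on visual tokens), then amplify visual-token attention in proportion to that sensitivity and renormalize. The linear scaling rule and the `alpha` parameter below are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def head_visual_sensitivity(attn, visual_idx):
    """Per-head visual sensitivity: average attention mass each head
    places on visual tokens. attn: (heads, queries, keys), rows
    normalized to sum to 1."""
    return attn[:, :, visual_idx].sum(axis=-1).mean(axis=-1)

def modulate_heads(attn, visual_idx, alpha=0.5):
    """Amplify each head's attention to visual tokens in proportion
    to its measured sensitivity, then renormalize every row."""
    sens = head_visual_sensitivity(attn, visual_idx)
    out = attn.copy()
    for h, s in enumerate(sens):
        # Visually sensitive heads get a larger boost (1 + alpha * s);
        # insensitive heads are left nearly unchanged.
        scale = np.ones(attn.shape[-1])
        scale[visual_idx] = 1.0 + alpha * s
        out[h] = out[h] * scale
    return out / out.sum(axis=-1, keepdims=True)
```

Because the boost depends on measured sensitivity rather than a fixed constant, heads that already carry visual information are strengthened most, which matches the abstract's "differentially amplifies" description; no weights are updated, so no retraining is needed.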