Improving Visual Commonsense in Language Models via Multiple Image Generation

Guy Yariv, Idan Schwartz, Yossi Adi, Sagie Benaim

2024-06-21

Summary

This paper discusses a new method for improving how large language models (LLMs) understand visual commonsense: the model generates multiple images from the input text prompt and mixes the predictions made with each image to reach a better-informed answer.

What's the problem?

Many LLMs are trained almost entirely on text, which limits their ability to pick up important visual knowledge about the world. Visual Language Models (VLMs), on the other hand, are good at image-oriented tasks but often fail at non-visual tasks such as basic commonsense reasoning. This leaves a gap between strong visual understanding and solid text-based language reasoning, making it hard for a single model to perform well on tasks that need both.

What's the solution?

The researchers propose a method that generates several images from the given text prompt. These images are integrated into the model's decision-making process by mixing the prediction probabilities obtained with each of them. To do this, they use a late-fusion layer that combines projected visual features from the generated images with the output of a pre-trained LLM conditioned on the text alone. This lets the model predict from combined image-text knowledge when the visual signal helps, and fall back to text-only predictions when it does not. Their experiments show that this approach significantly improves performance on visual commonsense reasoning tasks as well as traditional language tasks. A rough sketch of the mechanism appears below.
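
To make this concrete, here is a minimal PyTorch-style sketch of the two ingredients described above: a late-fusion layer that merges projected image features with the hidden states of a text-only LLM, and the mixing (averaging) of prediction probabilities across several generated images. The class and argument names, dimensions, and gating design are illustrative assumptions, not the authors' implementation; the official code is at https://github.com/guyyariv/vLMIG.

# Illustrative sketch only; names and layer shapes are assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LateFusion(nn.Module):
    """Combine projected visual features with the hidden states of a text-only LLM."""

    def __init__(self, vis_dim: int, llm_dim: int):
        super().__init__()
        self.proj = nn.Linear(vis_dim, llm_dim)       # project image features into the LLM's space
        self.gate = nn.Linear(2 * llm_dim, llm_dim)   # learn how much visual signal to add

    def forward(self, text_hidden: torch.Tensor, vis_feat: torch.Tensor) -> torch.Tensor:
        # text_hidden: (batch, seq, llm_dim); vis_feat: (batch, vis_dim)
        v = self.proj(vis_feat).unsqueeze(1).expand_as(text_hidden)
        fused = self.gate(torch.cat([text_hidden, v], dim=-1))
        # Residual connection keeps the text-only path intact when the images add nothing.
        return text_hidden + fused

def predict_with_generated_images(prompt_ids, llm, lm_head, vision_encoder,
                                  fusion, generated_images):
    """Average next-token probabilities over several images generated from the prompt.

    `llm` is assumed to return hidden states for the text prompt, `vision_encoder`
    to return one feature vector per image, and `lm_head` to map hidden states to
    vocabulary logits.
    """
    text_hidden = llm(prompt_ids)                     # frozen, text-only forward pass
    probs = []
    for image in generated_images:                    # e.g. K samples from a text-to-image model
        vis_feat = vision_encoder(image)              # (batch, vis_dim)
        logits = lm_head(fusion(text_hidden, vis_feat))
        probs.append(F.softmax(logits[:, -1, :], dim=-1))
    return torch.stack(probs).mean(dim=0)             # mixed prediction probabilities

The intuition behind mixing over several images is that no single generated image has to be perfect: averaging the probabilities smooths out images that happen to be off-topic or low quality.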

Why it matters?

This research is important because it enhances the capabilities of AI models to understand and reason about the world in a more human-like way. By improving how these models integrate visual and textual information, we can create more effective AI systems for applications like automated reasoning, reading comprehension, and even creative tasks that require understanding complex scenarios involving both words and images.

Abstract

Commonsense reasoning is fundamentally based on multimodal knowledge. However, existing large language models (LLMs) are primarily trained using textual data only, limiting their ability to incorporate essential visual information. In contrast, Visual Language Models, which excel at visually-oriented tasks, often fail at non-visual tasks such as basic commonsense reasoning. This divergence highlights a critical challenge - the integration of robust visual understanding with foundational text-based language reasoning. To this end, we introduce a method aimed at enhancing LLMs' visual commonsense. Specifically, our method generates multiple images based on the input text prompt and integrates these into the model's decision-making process by mixing their prediction probabilities. To facilitate multimodal grounded language modeling, we employ a late-fusion layer that combines the projected visual features with the output of a pre-trained LLM conditioned on text only. This late-fusion layer enables predictions based on comprehensive image-text knowledge as well as text only when this is required. We evaluate our approach using several visual commonsense reasoning tasks together with traditional NLP tasks, including common sense reasoning and reading comprehension. Our experimental results demonstrate significant superiority over existing baselines. When applied to recent state-of-the-art LLMs (e.g., Llama3), we observe improvements not only in visual common sense but also in traditional NLP benchmarks. Code and models are available under https://github.com/guyyariv/vLMIG.