Interleaved Scene Graph for Interleaved Text-and-Image Generation Assessment

Dongping Chen, Ruoxi Chen, Shu Pu, Zhaoyi Liu, Yanru Wu, Caixi Chen, Benlin Liu, Yue Huang, Yao Wan, Pan Zhou, Ranjay Krishna

2024-11-27

Summary

This paper introduces ISG, a new framework for evaluating how well models generate text and images together, similar to how a cookbook provides instructions with pictures.

What's the problem?

Many models that generate text and images together struggle to keep the information consistent and coherent. For example, when a user asks for a recipe, the model might not accurately match the text instructions with the correct images, leading to confusion and poor quality outputs.

What's the solution?

ISG (Interleaved Scene Graph) addresses this issue by using a scene graph structure to analyze the relationships between text and image blocks. It evaluates generated content at four levels of granularity: overall quality (holistic), structure, specific blocks of text and images, and individual images. The authors also created a benchmark called ISG-Bench, with 1,150 samples across 8 categories, to test these models effectively. This benchmark reveals that current unified vision-language models often perform poorly at generating interleaved content.
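The idea of representing an answer as a scene graph and scoring it at four granularities can be sketched in a few lines. Everything below is a hypothetical illustration, not the paper's implementation: the `Block` and `SceneGraph` classes, the field names, and the question-answer format are all assumptions.

```python
from dataclasses import dataclass

# Hypothetical sketch: blocks are nodes, text-to-image dependencies are
# edges, and each of the four ISG levels is scored from yes/no QA pairs.

@dataclass
class Block:
    kind: str     # "text" or "image"
    content: str  # text body or a description of the image

@dataclass
class SceneGraph:
    blocks: list
    edges: list   # (src, dst) index pairs, e.g. a text step -> its image

    def evaluate(self, qa):
        """qa maps each level to a list of (question, passed) pairs.
        Returns per-level accuracy across the four granularities."""
        levels = ("holistic", "structural", "block", "image")
        return {
            level: (sum(ok for _, ok in qa[level]) / len(qa[level])
                    if qa.get(level) else None)
            for level in levels
        }

# Toy example: a two-step recipe answer with one image per step.
graph = SceneGraph(
    blocks=[
        Block("text", "Step 1: beat the eggs."),
        Block("image", "photo of beaten eggs"),
        Block("text", "Step 2: fry the rice."),
        Block("image", "photo of rice in a wok"),
    ],
    edges=[(0, 1), (2, 3)],
)

qa = {
    "holistic":   [("Does the answer address the query?", True)],
    "structural": [("Does every step have an image?", True)],
    "block":      [("Does image 1 depict beaten eggs?", True),
                   ("Does image 2 depict fried rice?", False)],
    "image":      [("Is image 2 free of artifacts?", True)],
}
scores = graph.evaluate(qa)
```

Here `scores["block"]` comes out to 0.5 because one of the two block-level checks failed, while the holistic, structural, and image levels all score 1.0; the interpretable per-question feedback mirrors the question-answer feedback the framework provides.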

Why it matters?

This research is important because it provides a systematic way to evaluate and improve models that generate both text and images. By developing ISG and ISG-Bench, the authors aim to enhance the quality of interleaved text-and-image generation, which can significantly benefit applications like online tutorials, educational content, and interactive media.

Abstract

Many real-world user queries (e.g. "How do I make egg fried rice?") could benefit from systems capable of generating responses with both textual steps and accompanying images, similar to a cookbook. Models designed to generate interleaved text and images face challenges in ensuring consistency within and across these modalities. To address these challenges, we present ISG, a comprehensive evaluation framework for interleaved text-and-image generation. ISG leverages a scene graph structure to capture relationships between text and image blocks, evaluating responses at four levels of granularity: holistic, structural, block-level, and image-specific. This multi-tiered evaluation allows for a nuanced assessment of consistency, coherence, and accuracy, and provides interpretable question-answer feedback. In conjunction with ISG, we introduce a benchmark, ISG-Bench, encompassing 1,150 samples across 8 categories and 21 subcategories. This benchmark dataset includes complex language-vision dependencies and golden answers to evaluate models effectively on vision-centric tasks such as style transfer, a challenging area for current models. Using ISG-Bench, we demonstrate that recent unified vision-language models perform poorly on generating interleaved content. While compositional approaches that combine separate language and image models show a 111% improvement over unified models at the holistic level, their performance remains suboptimal at both block and image levels. To facilitate future work, we develop ISG-Agent, a baseline agent employing a "plan-execute-refine" pipeline to invoke tools, achieving a 122% performance improvement.