
Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Model

Wenqi Zhang, Zhenglin Cheng, Yuanyu He, Mengna Wang, Yongliang Shen, Zeqi Tan, Guiyang Hou, Mingqian He, Yanna Ma, Weiming Lu, Yueting Zhuang

2024-07-13

Summary

This paper introduces Multimodal Self-Instruct, an approach that uses large language models to synthesize abstract images and matching visual reasoning instructions, giving large multimodal models (LMMs) better training data for understanding charts, maps, diagrams, and other abstract visuals.

What's the problem?

While current LMMs can recognize natural images like photos of people and landscapes, they struggle with abstract images such as charts, maps, and diagrams. This limitation makes it difficult for them to perform everyday tasks, like telling time from a clock or following a flowchart. As a result, their ability to reason visually is still quite basic.

What's the solution?

To address this issue, the authors use large language models and their code-writing abilities to generate large numbers of abstract images together with corresponding visual reasoning instructions. From this synthetic data they build a benchmark of 11,193 instructions spanning eight scenarios: charts, tables, simulated maps, dashboards, flowcharts, relation graphs, floor plans, and visual puzzles. They also fine-tune an LMM on 62,476 synthetic chart, table, and road map instructions, which improves its chart understanding and map navigation and helps the model learn to interpret and reason about abstract visual information. A minimal sketch of the synthesis loop is shown below.
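The sketch below illustrates the general code-driven synthesis idea, not the authors' implementation: a specification for an abstract image (here, a simple bar chart) is proposed, rendered to an image file with matplotlib, and paired with a reasoning question whose answer is derived from the same specification. The helper names propose_chart_spec, render_chart, and build_instruction are hypothetical placeholders; in the paper's pipeline, a large language model writes both the rendering code and the question-answer pairs.

```python
# Minimal sketch of a "synthesize abstract image + instruction" loop.
# Helper names are hypothetical stand-ins, not the authors' actual code.
import json
import random

import matplotlib
matplotlib.use("Agg")  # render off-screen; no display required
import matplotlib.pyplot as plt


def propose_chart_spec(rng: random.Random) -> dict:
    """Stand-in for the LLM step that invents an abstract-image scenario."""
    categories = ["Q1", "Q2", "Q3", "Q4"]
    return {
        "title": "Quarterly sales",
        "categories": categories,
        "values": [rng.randint(10, 100) for _ in categories],
    }


def render_chart(spec: dict, path: str) -> None:
    """Stand-in for executing LLM-written plotting code to produce the image."""
    fig, ax = plt.subplots()
    ax.bar(spec["categories"], spec["values"])
    ax.set_title(spec["title"])
    fig.savefig(path)
    plt.close(fig)


def build_instruction(spec: dict, image_path: str) -> dict:
    """Pair the rendered image with a question and an answer derived from the spec."""
    best = spec["categories"][spec["values"].index(max(spec["values"]))]
    return {
        "image": image_path,
        "question": "Which quarter has the highest sales in the bar chart?",
        "answer": best,
    }


if __name__ == "__main__":
    rng = random.Random(0)
    samples = []
    for i in range(3):
        spec = propose_chart_spec(rng)
        image_path = f"chart_{i}.png"
        render_chart(spec, image_path)
        samples.append(build_instruction(spec, image_path))
    with open("synthetic_chart_instructions.json", "w") as f:
        json.dump(samples, f, indent=2)
```

In this sketch the answer is computed from the same specification that produced the image, so every synthetic sample carries an exact label without human annotation; the paper's pipeline generalizes this code-based idea across the eight scenarios listed above.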

Why it matters?

This research is important because it highlights the need for better training methods for AI models to handle abstract concepts. By improving how LMMs understand and reason about visual information, this work can lead to more effective AI applications in areas like education, navigation systems, and data analysis.

Abstract

Although most current large multimodal models (LMMs) can already understand photos of natural scenes and portraits, their understanding of abstract images, e.g., charts, maps, or layouts, and visual reasoning capabilities remain quite rudimentary. They often struggle with simple daily tasks, such as reading time from a clock, understanding a flowchart, or planning a route using a road map. In light of this, we design a multi-modal self-instruct, utilizing large language models and their code capabilities to synthesize massive abstract images and visual reasoning instructions across daily scenarios. Our strategy effortlessly creates a multimodal benchmark with 11,193 instructions for eight visual scenarios: charts, tables, simulated maps, dashboards, flowcharts, relation graphs, floor plans, and visual puzzles. This benchmark, constructed with simple lines and geometric elements, exposes the shortcomings of most advanced LMMs like Claude-3.5-Sonnet and GPT-4o in abstract image understanding, spatial relations reasoning, and visual element induction. Besides, to verify the quality of our synthetic data, we fine-tune an LMM using 62,476 synthetic chart, table and road map instructions. The results demonstrate improved chart understanding and map navigation performance, and also demonstrate potential benefits for other visual reasoning tasks. Our code is available at: https://github.com/zwq2018/Multi-modal-Self-instruct.