
Visual Sketchpad: Sketching as a Visual Chain of Thought for Multimodal Language Models

Yushi Hu, Weijia Shi, Xingyu Fu, Dan Roth, Mari Ostendorf, Luke Zettlemoyer, Noah A. Smith, Ranjay Krishna

2024-06-14


Summary

This paper introduces Sketchpad, a new framework that allows multimodal language models (LMs) to use sketching as a way to enhance their reasoning abilities. By enabling LMs to draw and interact with visual content, Sketchpad helps them think more like humans do.

What's the problem?

Current multimodal language models primarily rely on text for reasoning and problem-solving, which limits their ability to understand complex visual tasks. Unlike humans, who often use drawings and sketches to clarify their thoughts and ideas, these models do not have the capability to incorporate visual aids into their reasoning processes. This can make it difficult for them to tackle tasks that require spatial understanding or visual reasoning.

What's the solution?

The authors developed Sketchpad to fill this gap by giving LMs a visual sketchpad on which they can draw lines, boxes, marks, and other shapes. This lets the models create visual representations of their intermediate reasoning steps, much as people sketch out ideas when solving problems. Sketchpad can also call specialist vision models during sketching, for example drawing bounding boxes with an object detection model or drawing masks with a segmentation model. The authors tested Sketchpad on a range of math and visual reasoning tasks and found that it substantially improved performance over text-only reasoning, with average gains of 12.7% on math tasks and 8.6% on vision tasks.
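To make the idea concrete, here is a minimal, hypothetical sketch of the sketchpad concept in Python. It is not the authors' implementation; the names (SketchpadCanvas, Box, detect_objects, draw_box, draw_line) are illustrative assumptions. The point is that a specialist tool's output and auxiliary marks get drawn onto a shared image, and that updated image becomes the next visual input for the multimodal LM.

```python
# Minimal, hypothetical illustration of the Sketchpad idea (not the paper's code):
# the model alternates between text reasoning and drawing actions on a shared
# image canvas, then reasons over the updated image.
from dataclasses import dataclass
from PIL import Image, ImageDraw


@dataclass
class Box:
    x0: float
    y0: float
    x1: float
    y1: float
    label: str = ""


class SketchpadCanvas:
    """Holds the working image that the model can draw on and re-inspect."""

    def __init__(self, image: Image.Image):
        self.image = image.convert("RGB").copy()

    def draw_box(self, box: Box, color: str = "red") -> Image.Image:
        # e.g. visualize a detection result as one sketching step
        draw = ImageDraw.Draw(self.image)
        draw.rectangle([box.x0, box.y0, box.x1, box.y1], outline=color, width=3)
        if box.label:
            draw.text((box.x0, max(box.y0 - 12, 0)), box.label, fill=color)
        return self.image

    def draw_line(self, p0, p1, color: str = "blue") -> Image.Image:
        # e.g. an auxiliary line for a geometry problem
        draw = ImageDraw.Draw(self.image)
        draw.line([p0, p1], fill=color, width=3)
        return self.image


def detect_objects(image: Image.Image, query: str) -> list[Box]:
    """Stand-in for a specialist detector (e.g. an open-vocabulary model).
    Here it returns a fixed box so the example runs end to end."""
    return [Box(40, 40, 160, 160, label=query)]


if __name__ == "__main__":
    canvas = SketchpadCanvas(Image.new("RGB", (320, 240), "white"))
    # Step 1: call a vision specialist, then sketch its output onto the canvas.
    for box in detect_objects(canvas.image, "cup"):
        canvas.draw_box(box)
    # Step 2: add an auxiliary mark; a multimodal LM would now be prompted
    # with this updated image as its next visual reasoning step.
    canvas.draw_line((0, 200), (320, 200))
    canvas.image.save("sketchpad_step.png")
```

In the actual framework, the LM decides which drawing or specialist-model action to take at each step and conditions its subsequent reasoning on the resulting image, rather than following the fixed two-step script shown here.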

Why it matters?

This research is important because it shows how incorporating drawing and sketching into AI reasoning can enhance the capabilities of language models. By mimicking human thought processes more closely, Sketchpad can help improve the performance of AI in real-world applications, such as education, design, and complex problem-solving scenarios.

Abstract

Humans draw to facilitate reasoning: we draw auxiliary lines when solving geometry problems; we mark and circle when reasoning on maps; we use sketches to amplify our ideas and relieve our limited-capacity working memory. However, such actions are missing in current multimodal language models (LMs). Current chain-of-thought and tool-use paradigms only use text as intermediate reasoning steps. In this work, we introduce Sketchpad, a framework that gives multimodal LMs a visual sketchpad and tools to draw on the sketchpad. The LM conducts planning and reasoning according to the visual artifacts it has drawn. Different from prior work, which uses text-to-image models to enable LMs to draw, Sketchpad enables LMs to draw with lines, boxes, marks, etc., which is closer to human sketching and better facilitates reasoning. Sketchpad can also use specialist vision models during the sketching process (e.g., draw bounding boxes with object detection models, draw masks with segmentation models), to further enhance visual perception and reasoning. We experiment with a wide range of math tasks (including geometry, functions, graphs, and chess) and complex visual reasoning tasks. Sketchpad substantially improves performance on all tasks over strong base models with no sketching, yielding an average gain of 12.7% on math tasks, and 8.6% on vision tasks. GPT-4o with Sketchpad sets a new state of the art on all tasks, including V*Bench (80.3%), BLINK spatial reasoning (83.9%), and visual correspondence (80.8%). All codes and data are in https://visualsketchpad.github.io/.