
Latent Sketchpad: Sketching Visual Thoughts to Elicit Multimodal Reasoning in MLLMs

Huanyu Zhang, Wenshan Wu, Chengzu Li, Ning Shang, Yan Xia, Yangyu Huang, Yifan Zhang, Li Dong, Zhang Zhang, Liang Wang, Tieniu Tan, Furu Wei

2025-10-29

Summary

This paper introduces a new way to help AI models that can 'see' and understand images, known as Multimodal Large Language Models (MLLMs), become better at tasks that require visual planning and imagination.

What's the problem?

Current AI models are good at recognizing what's *in* an image, but they struggle with problems that need them to think ahead visually or create a plan, like figuring out how to navigate a maze. It's like they can see the pieces, but can't visualize how to put them together to solve a problem. They lack a way to 'sketch out' ideas internally.

What's the solution?

The researchers created a system called 'Latent Sketchpad' that gives these AI models an internal 'scratchpad' for visual thinking. It lets the AI generate simplified visual representations – think of them as rough sketches – interleaved with its usual text-based reasoning. These sketches stay internal rather than being shown directly, but they guide the AI as it thinks through the problem. The system has two key parts: a Context-Aware Vision Head that produces these internal visual representations step by step, and a pretrained Sketch Decoder that turns them into actual images we can see, making the AI's thought process more understandable. The framework was tested on MazePlanning, a new maze-solving dataset the authors built.
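To make the interleaving idea concrete, here is a minimal sketch of how such a reasoning loop could look. This is not the authors' implementation: the `mllm`, `vision_head`, and `sketch_decoder` objects and their methods are hypothetical interfaces assumed only for illustration.

```python
def reason_with_latent_sketchpad(mllm, vision_head, sketch_decoder, prompt, max_steps=10):
    """Alternate between textual reasoning steps and generated visual latents.

    A conceptual sketch under assumed interfaces, not the paper's code.
    """
    context = mllm.encode(prompt)   # multimodal context (text + image features)
    sketches = []                   # decoded sketches, kept only for interpretability

    for _ in range(max_steps):
        # 1. Continue the textual chain of thought from the current context.
        text, context = mllm.generate_text(context)

        # 2. Produce visual latents conditioned on everything generated so far
        #    (the role the paper assigns to the Context-Aware Vision Head).
        visual_latents = vision_head(context)

        # 3. Feed the latents back into the context so they can guide later steps.
        context = mllm.append_visual_latents(context, visual_latents)

        # 4. Optionally render the latents as a human-readable sketch
        #    (the role the paper assigns to the Sketch Decoder).
        sketches.append(sketch_decoder(visual_latents))

        if mllm.is_finished(text):
            break

    return text, sketches
```

The key design point this sketch tries to capture is that the visual latents are produced inside the model's own autoregressive loop and fed back as context, while rendering them to images is an optional step used only for interpretability.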

Why it matters?

This research is important because it allows AI to move beyond just *seeing* things to actually *thinking* visually. This opens up possibilities for more natural and helpful interactions between humans and computers, and could lead to AI being used in more complex and creative ways, like helping with design or problem-solving in fields that require spatial reasoning.

Abstract

While Multimodal Large Language Models (MLLMs) excel at visual understanding, they often struggle in complex scenarios that require visual planning and imagination. Inspired by how humans use sketching as a form of visual thinking to develop and communicate ideas, we introduce Latent Sketchpad, a framework that equips MLLMs with an internal visual scratchpad. The internal visual representations of MLLMs have traditionally been confined to perceptual understanding. We repurpose them to support generative visual thought without compromising reasoning ability. Building on frontier MLLMs, our approach integrates visual generation directly into their native autoregressive reasoning process. It allows the model to interleave textual reasoning with the generation of visual latents. These latents guide the internal thought process and can be translated into sketch images for interpretability. To realize this, we introduce two components: a Context-Aware Vision Head that autoregressively produces visual representations, and a pretrained Sketch Decoder that renders these into human-interpretable images. We evaluate the framework on our new dataset MazePlanning. Experiments across various MLLMs show that Latent Sketchpad delivers comparable or even superior reasoning performance to their backbones. It further generalizes across distinct frontier MLLMs, including Gemma3 and Qwen2.5-VL. By extending the model's textual reasoning to visual thinking, our framework opens new opportunities for richer human-computer interaction and broader applications. More details and resources are available on our project page: https://latent-sketchpad.github.io/.