Thinking-while-Generating: Interleaving Textual Reasoning throughout Visual Generation

Ziyu Guo, Renrui Zhang, Hongyu Li, Manyuan Zhang, Xinyan Chen, Sifan Wang, Yan Feng, Peng Pei, Pheng-Ann Heng

2025-11-21

Summary

This paper introduces a new way to create images using artificial intelligence, focusing on making the AI 'think' during the image creation process, not just before or after.

What's the problem?

Current AI image generators often lack a continuous thought process while creating images. They might plan what to draw beforehand or refine the image afterward, but they don't really 'think' and adjust as they go, leading to images that sometimes lack context or don't quite make sense as a whole.

What's the solution?

The researchers developed a framework called 'Thinking-while-Generating' (TwiG) in which the AI continually reasons in text *while* it is building the image. It uses this reasoning both to guide what it draws next and to check whether what it has already drawn still fits the overall idea. They tested three ways to make this 'thinking' happen: zero-shot prompting with pre-written instructions, supervised fine-tuning on a new dataset built for this task (TwiG-50K), and reinforcement learning with a reward system that encourages good reasoning.
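The alternating think-then-draw loop can be sketched as follows. This is a minimal illustration of the idea, not the paper's actual implementation: the helper functions (`plan_next_region`, `generate_region`, `reflect`) are hypothetical stand-ins that use plain strings in place of real model calls.

```python
# Illustrative sketch of an interleaved "thinking-while-generating" loop.
# All helpers below are placeholders, NOT the paper's real API.

def plan_next_region(prompt, canvas, thoughts):
    """Textual reasoning that guides the next local region."""
    return f"plan region {len(canvas)} for '{prompt}'"

def generate_region(plan, canvas):
    """Stand-in for the visual generator producing one region."""
    return f"region<{plan}>"

def reflect(prompt, canvas, thoughts):
    """Textual reasoning that reflects on regions drawn so far."""
    return f"reflect on {len(canvas)} region(s)"

def twig_generate(prompt, num_regions):
    canvas, thoughts = [], []
    for _ in range(num_regions):
        plan = plan_next_region(prompt, canvas, thoughts)   # think: guide upcoming region
        thoughts.append(plan)
        canvas.append(generate_region(plan, canvas))        # generate one local region
        thoughts.append(reflect(prompt, canvas, thoughts))  # think: reflect on what exists
    return canvas, thoughts

canvas, thoughts = twig_generate("a cat on a sofa", 3)
# Reasoning and generation strictly alternate: two thoughts per region.
assert len(canvas) == 3 and len(thoughts) == 6
```

The key contrast with plan-first or refine-after pipelines is that reasoning happens *inside* the generation loop, so each new region is conditioned on an up-to-date textual critique of everything drawn so far.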

Why does it matter?

This research is important because it moves AI image generation closer to creating truly intelligent and contextually aware visuals. By allowing the AI to think and adapt during the creation process, it can produce more detailed, coherent, and meaningful images, potentially leading to better AI art and design tools.

Abstract

Recent advances in visual generation have increasingly explored the integration of reasoning capabilities. Existing approaches incorporate textual reasoning, i.e., thinking, either before (as pre-planning) or after (as post-refinement) the generation process, yet they lack on-the-fly multimodal interaction during the generation itself. In this preliminary study, we introduce Thinking-while-Generating (TwiG), the first interleaved framework that enables co-evolving textual reasoning throughout the visual generation process. As the visual content is progressively generated, textual reasoning is interleaved to both guide upcoming local regions and reflect on previously synthesized ones. This dynamic interplay produces more context-aware and semantically rich visual outputs. To unveil the potential of this framework, we investigate three candidate strategies: zero-shot prompting, supervised fine-tuning (SFT) on our curated TwiG-50K dataset, and reinforcement learning (RL) via a customized TwiG-GRPO strategy, each offering unique insights into the dynamics of interleaved reasoning. We hope this work inspires further research into interleaving textual reasoning for enhanced visual generation. Code will be released at: https://github.com/ZiyuGuo99/Thinking-while-Generating.