Think in Strokes, Not Pixels: Process-Driven Image Generation via Interleaved Reasoning
Lei Zhang, Junjiao Tian, Zhipeng Fan, Kunpeng Li, Jialiang Wang, Weifeng Chen, Markos Georgopoulos, Felix Juefei-Xu, Yuxiang Bao, Julian McAuley, Manling Li, Zecheng He
2026-04-09
Summary
This paper explores a new way for AI to generate images, moving away from creating a picture all at once to building it up step-by-step, much like a human artist would.
What's the problem?
Current AI image generators typically create an image from a text description in a single pass, which can lead to issues with consistency and detail. The core challenge is that it's hard for the AI to 'check its work' during the process – how does it know whether a partially finished image is heading in the right direction? There's inherent ambiguity in what the intermediate steps should look like.
What's the solution?
The researchers developed a method called 'process-driven image generation'. Instead of generating in one step, the AI cycles through four stages: it plans with text, creates a rough visual draft, reflects on the draft with more text, and then refines the visual details. The text guides the visual changes, and the visuals in turn influence the next round of text planning. To help the AI evaluate its progress, they apply dense, step-wise supervision that both checks the visual consistency of each draft and verifies that the text instructions are still being followed.
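The four-stage loop can be sketched in pseudocode. Everything below is a hypothetical illustration of the control flow only: the function names (`plan`, `draft`, `reflect`, `refine`) and the dictionary-based image state are stand-ins, not the paper's actual interfaces or models.

```python
# Hypothetical sketch of the iterative plan -> draft -> reflect -> refine loop.
# The image is modeled as a plain dict so the control flow is runnable;
# a real system would carry latent or pixel tensors instead.

def plan(prompt, image):
    """Stage 1, textual planning: decide what the next draft should add."""
    return f"plan for '{prompt}' at step {image['step']}"

def draft(image, plan_text):
    """Stage 2, visual drafting: produce a coarser intermediate image."""
    return {"step": image["step"] + 1, "notes": plan_text}

def reflect(prompt, image):
    """Stage 3, textual reflection: list prompt-violating elements (stubbed empty)."""
    return []

def refine(image, critique):
    """Stage 4, visual refinement: correct the flagged elements (no-op stub)."""
    return image

def generate(prompt, num_iters=3):
    """Unfold generation across multiple iterations of the four stages."""
    image = {"step": 0, "notes": ""}
    for _ in range(num_iters):
        p = plan(prompt, image)    # textual planning
        image = draft(image, p)    # visual drafting
        c = reflect(prompt, image) # textual reflection
        image = refine(image, c)   # visual refinement
    return image

result = generate("a red bicycle", num_iters=3)
```

The key design point is the alternation: each textual stage conditions the next visual stage, and each visual state grounds the next textual stage.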
Why it matters?
This approach makes AI image generation more controllable, understandable, and easier to improve. By breaking down the process into steps, it's clearer *how* the AI is creating the image, and it allows for more direct feedback and correction, potentially leading to higher quality and more accurate results.
Abstract
Humans paint images incrementally: they plan a global layout, sketch a coarse draft, inspect it, and refine details, and most importantly, each step is grounded in the evolving visual state. However, can unified multimodal models trained on text-image interleaved datasets also imagine this chain of intermediate states? In this paper, we introduce process-driven image generation, a multi-step paradigm that decomposes synthesis into an interleaved reasoning trajectory of thoughts and actions. Rather than generating images in a single step, our approach unfolds across multiple iterations, each consisting of four stages: textual planning, visual drafting, textual reflection, and visual refinement. The textual reasoning explicitly conditions how the visual state should evolve, while the generated visual intermediate in turn constrains and grounds the next round of textual reasoning. A core challenge of process-driven generation stems from the ambiguity of intermediate states: how can models evaluate each partially complete image? We address this through dense, step-wise supervision that maintains two complementary constraints: for the visual intermediate states, we enforce spatial and semantic consistency; for the textual intermediate states, we preserve the prior visual knowledge while enabling the model to identify and correct prompt-violating elements. This makes the generation process explicit, interpretable, and directly supervisable. To validate the proposed method, we conduct experiments on various text-to-image generation benchmarks.
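The dense, step-wise supervision described in the abstract combines two per-step constraints. As a minimal sketch under stated assumptions, one might aggregate a visual-consistency penalty and a text-faithfulness penalty at every iteration; the function name, the weights `w_vis`/`w_txt`, and the scalar penalties are all illustrative assumptions, not the paper's actual loss.

```python
# Hypothetical step-wise supervision sketch: at each iteration we pay
# (a) a visual-consistency penalty on the intermediate image and
# (b) a text-faithfulness penalty on the textual reasoning,
# then sum the weighted penalties across all steps.

def stepwise_loss(visual_consistency, text_faithfulness, w_vis=1.0, w_txt=1.0):
    """visual_consistency and text_faithfulness are per-step penalty lists,
    aligned by iteration; returns the total supervision signal."""
    assert len(visual_consistency) == len(text_faithfulness)
    return sum(w_vis * v + w_txt * t
               for v, t in zip(visual_consistency, text_faithfulness))
```

Supervising every step, rather than only the final image, is what makes each intermediate state directly evaluable despite its ambiguity.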