UniGRPO: Unified Policy Optimization for Reasoning-Driven Visual Generation

Jie Liu, Zilyu Ye, Linxiao Yuan, Shenhan Zhu, Yu Gao, Jie Wu, Kunchang Li, Xionghui Wang, Xiaonan Nie, Weilin Huang, Wanli Ouyang

2026-03-25

Summary

This paper introduces a new way to train AI models that can both understand text and create images, allowing them to work together in a back-and-forth process. It focuses on improving how these models reason about a request before generating an image based on that request.

What's the problem?

Current AI models often handle text and images separately. When you want a model to create an image based on a complex idea, it struggles to understand the nuances of the request and generate a truly relevant image. Specifically, training models to *think* through a problem before creating an image is difficult, and existing methods don't scale well to more complex, multi-step interactions.

What's the solution?

The researchers developed a unified reinforcement learning framework called UniGRPO. It treats reasoning and image creation as a single, continuous task: the whole process is modeled as a Markov Decision Process that receives a reward only at the very end. Rather than inventing new machinery, they reused established training recipes for each modality, standard GRPO for the text reasoning and FlowGRPO for the image synthesis, and made two key changes so the recipe scales. First, they removed classifier-free guidance from image generation, keeping rollouts simple and linear so the method can extend to multi-turn interactions. Second, they replaced the usual KL penalty on latents with a mean-squared-error penalty directly on the model's velocity fields, which more reliably stops the model from "reward hacking", that is, exploiting loopholes in the reward instead of genuinely improving.
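To make the "no learned critic" flavor of GRPO concrete, here is a minimal sketch of its group-relative advantage: each prompt gets several sampled rollouts, and each rollout's reward is normalized against the group's mean and standard deviation. This is illustrative only; the function name and the toy reward values are ours, not the paper's.

```python
import numpy as np

def grpo_advantages(rewards):
    """Group-relative advantages as used in GRPO-style training:
    normalize each rollout's scalar reward by the mean and std of
    its group, so no separate value network (critic) is needed."""
    rewards = np.asarray(rewards, dtype=float)
    return (rewards - rewards.mean()) / (rewards.std() + 1e-8)

# Example: four rollouts for one prompt, each scored by a terminal reward.
adv = grpo_advantages([0.2, 0.8, 0.5, 0.5])
```

Rollouts scoring above the group mean get positive advantages (their actions are reinforced); below-average rollouts get negative ones, with the advantages summing to roughly zero within each group.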

Why it matters?

This work is important because it provides a solid foundation for building more sophisticated AI systems that can engage in complex, multi-turn conversations and generate images based on detailed reasoning. It’s a step towards AI that can truly understand and respond to user needs in a creative and intelligent way, and it offers a way to improve the quality of AI-generated images when they require thoughtful consideration.

Abstract

Unified models capable of interleaved generation have emerged as a promising paradigm, with the community increasingly converging on autoregressive modeling for text and flow matching for image generation. To advance this direction, we propose a unified reinforcement learning framework tailored for interleaved generation. We validate our approach on its fundamental unit: a single round of reasoning-driven image generation, where the model first expands the user prompt through reasoning, followed by image synthesis. Formulating this multimodal generation process as a Markov Decision Process with sparse terminal rewards, we introduce UniGRPO to jointly optimize text and image generation policies using GRPO. Adopting a minimalist methodology to avoid over-design, we leverage established training recipes for both modalities by seamlessly integrating standard GRPO for reasoning and FlowGRPO for visual synthesis. To ensure scalability to multi-round interleaved generation, we introduce two critical modifications to the original FlowGRPO: (1) eliminating classifier-free guidance to maintain linear, unbranched rollouts, which is essential for scaling to complex scenarios involving multi-turn interactions and multi-condition generation (e.g., editing); and (2) replacing the standard latent KL penalty with an MSE penalty directly on the velocity fields, providing a more robust and direct regularization signal to mitigate reward hacking effectively. Our experiments demonstrate that this unified training recipe significantly enhances image generation quality through reasoning, providing a robust and scalable baseline for the future post-training of fully interleaved models.
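The abstract's second modification, regularizing with an MSE penalty on velocity fields rather than a KL penalty on latents, can be sketched in a few lines. This is a simplified illustration under our own naming and with toy inputs, not the paper's implementation: it assumes the current policy and a frozen reference model each predict a velocity field at the same noisy latent and timestep.

```python
import numpy as np

def velocity_mse_penalty(v_policy, v_ref):
    """Mean-squared error between the policy's predicted velocity field
    and a frozen reference model's prediction at the same (latent, t).
    Used in place of a latent-space KL term: drift of the fine-tuned
    flow-matching policy away from the reference is penalized directly."""
    v_policy = np.asarray(v_policy, dtype=float)
    v_ref = np.asarray(v_ref, dtype=float)
    return np.mean((v_policy - v_ref) ** 2)

# Identical velocity predictions incur zero penalty;
# deviation from the reference is penalized quadratically.
p_same = velocity_mse_penalty([0.1, 0.2], [0.1, 0.2])
p_drift = velocity_mse_penalty([0.3, 0.2], [0.1, 0.2])
```

Because the penalty acts on the quantity the flow-matching model actually outputs (the velocity), it gives a direct, well-scaled regularization signal, which the authors argue is more robust against reward hacking than an indirect KL estimate on latents.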