VARGPT-v1.1: Improve Visual Autoregressive Large Unified Model via Iterative Instruction Tuning and Reinforcement Learning
Xianwei Zhuang, Yuxin Xie, Yufan Deng, Dongchao Yang, Liming Liang, Jinghan Ru, Yuguo Yin, Yuexian Zou
2025-04-07
Summary
This paper presents VARGPT-v1.1, an upgraded AI model that can understand images and create or edit them from text instructions, such as describing a scene to generate a picture or tweaking an existing one.
What's the problem?
Earlier models struggled to handle both image understanding and image creation in one system, often requiring separate tools for editing and producing lower-quality output.
What's the solution?
VARGPT-v1.1 uses smarter training with feedback loops (learning from preferences about its mistakes) and a much larger dataset of image-text pairs, while upgrading its core language model to produce higher-resolution images and handle editing tasks.
Why does it matter?
This helps create better AI tools for designers, educators, or apps needing instant image generation/editing without switching between multiple programs.
Abstract
In this work, we present VARGPT-v1.1, an advanced unified visual autoregressive model that builds upon our previous framework VARGPT. The model preserves the dual paradigm of next-token prediction for visual understanding and next-scale generation for image synthesis. Specifically, VARGPT-v1.1 integrates: (1) a novel training strategy combining iterative visual instruction tuning with reinforcement learning through Direct Preference Optimization (DPO), (2) an expanded training corpus containing 8.3M visual-generative instruction pairs, (3) an upgraded language model backbone using Qwen2, (4) enhanced image generation resolution, and (5) emergent image editing capabilities without architectural modifications. These advancements enable VARGPT-v1.1 to achieve state-of-the-art performance in multimodal understanding and text-to-image instruction-following tasks, demonstrating significant improvements in both comprehension and generation metrics. Notably, through visual instruction tuning, the model acquires image editing functionality while maintaining architectural consistency with its predecessor, revealing the potential for unified visual understanding, generation, and editing. Our findings suggest that well-designed unified visual autoregressive models can effectively adopt flexible training strategies from large language models (LLMs), exhibiting promising scalability. The codebase and model weights are publicly available at https://github.com/VARGPT-family/VARGPT-v1.1.