ARGenSeg: Image Segmentation with Autoregressive Image Generation Model

Xiaolong Wang, Lixiang Ru, Ziyuan Huang, Kaixiang Ji, Dandan Zheng, Jingdong Chen, Jun Zhou

2025-10-24

Summary

This paper introduces a new way to perform image segmentation — identifying objects and their boundaries within an image — using large language models that understand both text and images.

What's the problem?

Existing methods for combining image segmentation with these powerful language models often struggle to capture fine details in images. They typically represent images in a simplified way, like just using boundary points, or add extra components specifically for segmentation. This limits the language model’s ability to truly ‘see’ and understand the image at the pixel level, hindering accurate and detailed segmentation.

What's the solution?

The researchers developed a system called ARGenSeg that treats segmentation as an image generation task. Instead of directly predicting a segmentation map, the language model generates visual ‘tokens’ which are then converted back into a full image representing the segmented objects. This leverages the language model’s understanding of pixels to create detailed masks. To speed things up, they also developed a method to generate these tokens in parallel for different scales of the image.
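The pipeline described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the MLLM, codebook contents, and patch sizes here are all hypothetical stand-ins, used only to show how discrete visual tokens could be decoded into a dense mask.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical VQ codebook: 16 entries, each a 4x4 grayscale patch
# (a real VQ-VAE learns these embeddings and uses a neural decoder).
CODEBOOK = rng.random((16, 4, 4))

def detokenize(token_ids, grid=(2, 2)):
    """Stitch the codebook patches for a grid of visual token ids
    into one image (stand-in for a real VQ-VAE decoder)."""
    h, w = grid
    rows = []
    for r in range(h):
        row = [CODEBOOK[token_ids[r * w + c]] for c in range(w)]
        rows.append(np.concatenate(row, axis=1))
    return np.concatenate(rows, axis=0)

def tokens_to_mask(token_ids, threshold=0.5):
    """Turn generated visual tokens into a binary segmentation mask."""
    img = detokenize(token_ids)
    return (img > threshold).astype(np.uint8)

# Pretend the language model emitted these token ids for the queried object.
mask = tokens_to_mask([3, 7, 1, 12])
print(mask.shape)  # a 2x2 grid of 4x4 patches -> an 8x8 mask
```

The key design point is that the mask is an ordinary decoded image, so segmentation quality depends entirely on the model's pixel-level understanding rather than on a task-specific segmentation head.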

Why it matters?

This work is important because it significantly improves the accuracy and speed of image segmentation when using large language models. By framing segmentation as image generation, it allows the model to utilize its full visual understanding capabilities, leading to better results than previous approaches and opening the door for more sophisticated image analysis.

Abstract

We propose a novel AutoRegressive Generation-based paradigm for image Segmentation (ARGenSeg), achieving multimodal understanding and pixel-level perception within a unified framework. Prior works integrating image segmentation into multimodal large language models (MLLMs) typically employ either boundary points representation or dedicated segmentation heads. These methods rely on discrete representations or semantic prompts fed into task-specific decoders, which limits the ability of the MLLM to capture fine-grained visual details. To address these challenges, we introduce a segmentation framework for MLLM based on image generation, which naturally produces dense masks for target objects. We leverage the MLLM to output visual tokens and detokenize them into images using a universal VQ-VAE, making the segmentation fully dependent on the pixel-level understanding of the MLLM. To reduce inference latency, we employ a next-scale-prediction strategy to generate the required visual tokens in parallel. Extensive experiments demonstrate that our method surpasses prior state-of-the-art approaches on multiple segmentation datasets with a remarkable boost in inference speed, while maintaining strong understanding capabilities.
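The latency benefit of next-scale prediction can be made concrete with a small back-of-the-envelope sketch. This is an assumed illustration (the scale sizes below are hypothetical, not taken from the paper): tokens within one scale are generated in a single parallel step, so the number of sequential decoding steps equals the number of scales rather than the total number of tokens.

```python
# Hypothetical side lengths of the token maps, coarse to fine.
SCALES = [1, 2, 4, 8]

def steps_next_token(scales):
    """Classic next-token decoding: one sequential step per visual token."""
    return sum(s * s for s in scales)

def steps_next_scale(scales):
    """Next-scale decoding: one sequential step per scale; all tokens
    within a scale are produced in parallel."""
    return len(scales)

print(steps_next_token(SCALES))   # 1 + 4 + 16 + 64 = 85 steps
print(steps_next_scale(SCALES))   # 4 steps
```

Even in this toy setting, sequential steps drop from 85 to 4, which is the mechanism behind the reported inference speedup.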