
OneFlow: Concurrent Mixed-Modal and Interleaved Generation with Edit Flows

John Nguyen, Marton Havasi, Tariq Berrada, Luke Zettlemoyer, Ricky T. Q. Chen

2025-10-08


Summary

This paper introduces OneFlow, a new type of AI model that can create images and text at the same time, in a far more flexible way than previous models allow.

What's the problem?

Existing AI models that generate images and text usually do so step by step, producing the text first and then the image, or vice versa. This rigid ordering limits how quickly and flexibly they can work. They can also be computationally expensive, requiring a lot of processing power to train.

What's the solution?

OneFlow solves this by combining two techniques. For text, it uses an insertion-based method called Edit Flow that adds words wherever they are needed, and for images, it uses a technique called Flow Matching that gradually turns noise into an image. Together, these let the model generate text and images concurrently, meaning at the same time, and its hierarchical sampling fills in the overall content first rather than getting stuck on perfect grammar or fine details (see the sketch below). The authors tested model sizes from 1B to 8B parameters to see how well the approach scales.
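To make the idea concrete, here is a minimal, hypothetical Python sketch of this kind of concurrent sampling loop. The names used below (model, out.insertions, out.velocity, sample_concurrent) are illustrative assumptions, not the paper's actual code or API: the text grows by inserting tokens while the image latent is pushed along a flow-matching trajectory, both inside the same loop.

import torch

# Minimal, hypothetical sketch of concurrent mixed-modal sampling in the spirit
# of OneFlow. The model interface (out.insertions, out.velocity) is an
# assumption for illustration, not the paper's actual API.
def sample_concurrent(model, prompt_ids, image_shape, num_steps=50, device="cpu"):
    text_ids = list(prompt_ids)                  # discrete text tokens, variable length
    x = torch.randn(image_shape, device=device)  # image latent, initialized as noise
    dt = 1.0 / num_steps

    for step in range(num_steps):
        t = torch.tensor(step * dt, device=device)

        # One joint forward pass; each modality conditions on the other.
        out = model(text_ids=text_ids, image_latent=x, t=t)

        # Text: insertion-based edit step. Insert from right to left so earlier
        # positions are not shifted by later insertions.
        for pos, tok in sorted(out.insertions, reverse=True):
            text_ids.insert(pos, tok)

        # Image: Euler step along the predicted flow-matching velocity field.
        x = x + dt * out.velocity

    return text_ids, x

The key design point is that neither modality waits for the other: every step updates the text and the image together, so the two can shape each other as they form.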

Why it matters?

OneFlow is important because it is more efficient to train than comparable autoregressive models, using up to 50% fewer training FLOPs, while producing better results on both generating and understanding images and text. It also opens up more natural and flexible forms of AI generation, such as refining an output step by step or producing content that seems to 'reason' through the process.

Abstract

We present OneFlow, the first non-autoregressive multimodal model that enables variable-length and concurrent mixed-modal generation. Unlike autoregressive models that enforce rigid causal ordering between text and image generation, OneFlow combines an insertion-based Edit Flow for discrete text tokens with Flow Matching for image latents. OneFlow enables concurrent text-image synthesis with hierarchical sampling that prioritizes content over grammar. Through controlled experiments across model sizes from 1B to 8B, we demonstrate that OneFlow outperforms autoregressive baselines on both generation and understanding tasks while using up to 50% fewer training FLOPs. OneFlow surpasses both autoregressive and diffusion-based approaches while unlocking new capabilities for concurrent generation, iterative refinement, and natural reasoning-like generation.
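For reference, a standard flow-matching objective for continuous latents looks like the following. This is the generic formulation of the technique the abstract names, not necessarily OneFlow's exact loss: a noisy latent is interpolated between Gaussian noise $x_0$ and a data latent $x_1$, and a network $v_\theta$ is trained to predict the straight-line velocity between them.

\[
x_t = (1 - t)\,x_0 + t\,x_1,
\qquad
\mathcal{L}_{\mathrm{FM}}(\theta)
= \mathbb{E}_{t,\,x_0,\,x_1}
\bigl\lVert v_\theta(x_t, t) - (x_1 - x_0) \bigr\rVert^2,
\]

with $t \sim \mathcal{U}[0,1]$, $x_0 \sim \mathcal{N}(0, I)$, and $x_1$ drawn from the data distribution. At sampling time, the learned velocity field is integrated from noise toward data, which is what the Euler step in the sketch above performs.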