FlowTok: Flowing Seamlessly Across Text and Image Tokens
Ju He, Qihang Yu, Qihao Liu, Liang-Chieh Chen
2025-03-17
Summary
This paper introduces FlowTok, a streamlined framework that lets AI move seamlessly between text and images, which is essential for tasks like generating images from text descriptions.
What's the problem?
AI models often struggle to bridge the gap between text and images because the two modalities are represented very differently: text is a sequence of words (1D tokens), while images are spatial and redundant (2D latent embeddings).
What's the solution?
FlowTok encodes images into a compact 1D token representation, similar to text, so the model can flow directly between the two modalities using flow matching. This eliminates the need for complex conditioning mechanisms or noise scheduling to align text and images.
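The core idea can be illustrated with a toy flow-matching step: linearly interpolate between text tokens and image tokens, train a network to predict the constant velocity between them, and sample by integrating that velocity from text to image. The shapes, the oracle velocity, and everything else below are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed compact 1D token shape (placeholder values, not FlowTok's).
num_tokens, dim = 128, 16
text_tokens = rng.normal(size=(num_tokens, dim))   # stand-in for encoded text
image_tokens = rng.normal(size=(num_tokens, dim))  # stand-in for encoded image

# Rectified-flow interpolation between the two modalities at time t:
#   x_t = (1 - t) * text + t * image, target velocity = image - text.
t = rng.uniform()
x_t = (1.0 - t) * text_tokens + t * image_tokens
target_velocity = image_tokens - text_tokens

def flow_matching_loss(predicted_velocity, target_velocity):
    # A trained network v_theta(x_t, t) would regress this target with a
    # plain MSE; no noise schedule or cross-attention conditioning needed.
    return np.mean((predicted_velocity - target_velocity) ** 2)

def euler_sample(x0, velocity_fn, steps=8):
    # Sampling: Euler-integrate the ODE dx/dt = v(x, t) from text to image.
    x, dt = x0.copy(), 1.0 / steps
    for i in range(steps):
        x = x + dt * velocity_fn(x, i * dt)
    return x

# With an oracle (perfectly trained) velocity, the loss is zero and
# integration from the text tokens recovers the image tokens.
loss = flow_matching_loss(target_velocity, target_velocity)
recovered = euler_sample(text_tokens, lambda x, t: image_tokens - text_tokens)
```

Because both endpoints live in the same compact 1D token space, generation in the reverse direction (image-to-text) uses the identical formulation with the endpoints swapped.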
Why it matters?
This work matters because it provides a simpler and more efficient way for AI to work with both text and images, requiring fewer resources and generating results faster while maintaining high quality.
Abstract
Bridging different modalities lies at the heart of cross-modality generation. While conventional approaches treat the text modality as a conditioning signal that gradually guides the denoising process from Gaussian noise to the target image modality, we explore a much simpler paradigm: directly evolving between text and image modalities through flow matching. This requires projecting both modalities into a shared latent space, which poses a significant challenge due to their inherently different representations: text is highly semantic and encoded as 1D tokens, whereas images are spatially redundant and represented as 2D latent embeddings. To address this, we introduce FlowTok, a minimal framework that seamlessly flows across text and images by encoding images into a compact 1D token representation. Compared to prior methods, this design reduces the latent space size by 3.3x at an image resolution of 256, eliminating the need for complex conditioning mechanisms or noise scheduling. Moreover, FlowTok naturally extends to image-to-text generation under the same formulation. With its streamlined architecture centered around compact 1D tokens, FlowTok is highly memory-efficient, requires significantly fewer training resources, and achieves much faster sampling speeds, all while delivering performance comparable to state-of-the-art models. Code will be available at https://github.com/bytedance/1d-tokenizer.