JanusFlow: Harmonizing Autoregression and Rectified Flow for Unified Multimodal Understanding and Generation
Yiyang Ma, Xingchao Liu, Xiaokang Chen, Wen Liu, Chengyue Wu, Zhiyu Wu, Zizheng Pan, Zhenda Xie, Haowei Zhang, Xingkai Yu, Liang Zhao, Yisong Wang, Jiaying Liu, Chong Ruan
2024-11-13
Summary
This paper introduces JanusFlow, a framework that unifies image understanding and image generation in a single model. Its streamlined design couples a large language model with rectified flow, a generative modeling technique, to improve how machines both interpret and create images.
What's the problem?
Existing models for understanding images and for generating them are typically built separately, and attempts to unify them often rely on complex architectures. This makes it hard to build a single model that handles both tasks efficiently, and many current unified methods underperform specialized models on one task or the other.
What's the solution?
JanusFlow addresses this by integrating an autoregressive language model with rectified flow in a streamlined architecture. The authors found that rectified flow can be trained directly within the existing large language model framework, without complicated architectural changes; a minimal sketch of such a training step is shown below. They further improved performance with two strategies: decoupling the understanding and generation encoders, and aligning their representations during unified training. As a result, JanusFlow matches or exceeds specialized models on both understanding and generation while clearly outperforming existing unified approaches.
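To make the core idea concrete, here is a minimal PyTorch sketch of one rectified-flow training step driven by an LLM-style backbone. The names `llm`, `image_latents`, and `text_embeds` are hypothetical illustrations, not the paper's actual API; the sketch only shows the standard rectified-flow objective of regressing the constant velocity between a noise sample and a data sample.

```python
import torch
import torch.nn.functional as F

def rectified_flow_loss(llm, image_latents, text_embeds):
    """One rectified-flow training step (minimal sketch; all names are hypothetical)."""
    x1 = image_latents                                    # clean image latents
    x0 = torch.randn_like(x1)                             # Gaussian noise sample
    t = torch.rand(x1.size(0), *([1] * (x1.dim() - 1)),   # per-sample time in [0, 1]
                   device=x1.device)

    # Rectified flow interpolates linearly between noise and data ...
    x_t = (1.0 - t) * x0 + t * x1

    # ... and regresses the constant velocity (x1 - x0) along that path.
    v_pred = llm(x_t, t, text_embeds)                     # hypothetical forward signature
    return F.mse_loss(v_pred, x1 - x0)
```

Because the target velocity is just `x1 - x0`, the loss fits naturally into an ordinary LLM training loop, which is the simplicity the paper emphasizes.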
Why it matters?
This research matters because it simplifies how machines can both understand and create images, making it easier to build applications in computer vision and creative content generation. By improving the efficiency of unified models, JanusFlow could lead to better tools for image editing, automated content creation, and more interactive AI systems.
Abstract
We present JanusFlow, a powerful framework that unifies image understanding and generation in a single model. JanusFlow introduces a minimalist architecture that integrates autoregressive language models with rectified flow, a state-of-the-art method in generative modeling. Our key finding demonstrates that rectified flow can be straightforwardly trained within the large language model framework, eliminating the need for complex architectural modifications. To further improve the performance of our unified model, we adopt two key strategies: (i) decoupling the understanding and generation encoders, and (ii) aligning their representations during unified training. Extensive experiments show that JanusFlow achieves comparable or superior performance to specialized models in their respective domains, while significantly outperforming existing unified approaches across standard benchmarks. This work represents a step toward more efficient and versatile vision-language models.
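The second strategy, representation alignment, can likewise be sketched in a few lines. Assuming hypothetical tensors `gen_features` (intermediate features from the generation path) and `und_features` (features from the understanding encoder for the same image), one simple form of the alignment term maximizes their cosine similarity:

```python
import torch.nn.functional as F

def representation_alignment_loss(gen_features, und_features):
    """Alignment term (minimal sketch; tensor names are hypothetical)."""
    gen = F.normalize(gen_features, dim=-1)
    und = F.normalize(und_features.detach(), dim=-1)   # no gradient into the encoder
    cos = (gen * und).sum(dim=-1)                      # cosine similarity per position
    return (1.0 - cos).mean()                          # minimizing pulls the features together
```

In unified training, a term like this would be added to the generation and understanding losses with a small weight; the paper's exact formulation may differ from this sketch.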