VINO: A Unified Visual Generator with Interleaved OmniModal Context
Junyi Chen, Tong He, Zhoujie Fu, Pengfei Wan, Kun Gai, Weicai Ye
2026-01-06
Summary
This paper introduces VINO, a new artificial intelligence model that can create and edit both images and videos using a single system, unlike previous models that needed separate tools for each task.
What's the problem?
Existing AI models for generating or editing images and videos are usually built for one specific task or one type of media. If you wanted to both create a video from text and edit an existing image, you would need separate, specialized systems. This is inefficient and makes it hard to keep results consistent across different visual tasks.
What's the solution?
The researchers created VINO by combining a vision-language model (VLM) with a type of transformer called a Multimodal Diffusion Transformer (MMDiT). Essentially, VINO takes text, images, and videos as input, encodes them into a single shared sequence of conditioning tokens, and then uses that sequence to guide the creation or editing process. They also developed a multi-stage training method that starts from a video generation model and gradually expands its abilities to handle images as well, producing a truly unified system.
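To make the idea of a shared conditioning sequence more concrete, here is a minimal Python/PyTorch sketch of how interleaved multimodal conditioning could work: each modality is projected into a common token space, the tokens are concatenated in prompt order, and a diffusion-style transformer block attends to them. All module names, dimensions, and shapes below are illustrative assumptions, not VINO's actual implementation.

```python
# Hypothetical sketch of interleaved multimodal conditioning (not the paper's code).
import torch
import torch.nn as nn

class InterleavedContextEncoder(nn.Module):
    """Projects text, image, and video features into one shared token space
    and concatenates them in their original interleaved order."""
    def __init__(self, text_dim=768, image_dim=1024, video_dim=1024, d_model=1024):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, d_model)
        self.image_proj = nn.Linear(image_dim, d_model)
        self.video_proj = nn.Linear(video_dim, d_model)

    def forward(self, segments):
        # segments: list of (modality, tensor[seq_len, feat_dim]) in prompt order
        tokens = []
        for modality, feats in segments:
            proj = {"text": self.text_proj,
                    "image": self.image_proj,
                    "video": self.video_proj}[modality]
            tokens.append(proj(feats))
        # One interleaved conditioning sequence shared by all tasks
        return torch.cat(tokens, dim=0)

class ConditionedDiffusionBlock(nn.Module):
    """One transformer block where noisy latent tokens attend to the
    interleaved conditioning tokens (a stand-in for an MMDiT block)."""
    def __init__(self, d_model=1024, n_heads=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                                nn.Linear(4 * d_model, d_model))

    def forward(self, latents, context):
        latents = latents + self.self_attn(latents, latents, latents)[0]
        latents = latents + self.cross_attn(latents, context, context)[0]
        return latents + self.ff(latents)

# Toy usage: a text instruction, one reference image, and a short video clip
encoder = InterleavedContextEncoder()
block = ConditionedDiffusionBlock()
context = encoder([
    ("text",  torch.randn(16, 768)),    # instruction tokens
    ("image", torch.randn(64, 1024)),   # reference-image patch features
    ("video", torch.randn(128, 1024)),  # video-clip patch features
]).unsqueeze(0)                          # add batch dimension
noisy_latents = torch.randn(1, 256, 1024)
denoised = block(noisy_latents, context)
print(denoised.shape)  # torch.Size([1, 256, 1024])
```

The key point of the sketch is that the denoising blocks never need to know which modality a conditioning token came from, which is what lets one backbone serve image and video tasks alike.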
Why it matters?
VINO represents a step towards more versatile and scalable AI for visual content creation. Having one model that can handle multiple tasks simplifies the process and allows for more complex edits and creations, like maintaining a consistent style or character across both images and videos. It shows that a single, flexible system can be a powerful foundation for future AI tools that can generate any kind of visual content.
Abstract
We present VINO, a unified visual generator that performs image and video generation and editing within a single framework. Instead of relying on task-specific models or independent modules for each modality, VINO uses a shared diffusion backbone that conditions on text, images, and videos, enabling a broad range of visual creation and editing tasks under one model. Specifically, VINO couples a vision-language model (VLM) with a Multimodal Diffusion Transformer (MMDiT), where multimodal inputs are encoded as interleaved conditioning tokens and then used to guide the diffusion process. This design supports multi-reference grounding, long-form instruction following, and coherent identity preservation across static and dynamic content, while avoiding modality-specific architectural components. To train such a unified system, we introduce a multi-stage training pipeline that progressively expands a video generation base model into a unified, multi-task generator capable of both image and video input and output. Across diverse generation and editing benchmarks, VINO demonstrates strong visual quality, faithful instruction following, improved reference and attribute preservation, and more controllable multi-identity edits. Our results highlight a practical path toward scalable unified visual generation and the promise of interleaved, in-context computation as a foundation for general-purpose visual creation.
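To illustrate what a progressive multi-stage training pipeline of this kind might look like, below is a minimal, hypothetical Python sketch in which training starts from a video-only task mix and later stages add image generation and editing tasks. The stage names, step budgets, task labels, and round-robin mixing strategy are assumptions made for illustration; they are not taken from the paper.

```python
# Hypothetical sketch of a progressive multi-stage training schedule.
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    tasks: tuple   # tasks whose data is mixed into this stage
    steps: int     # illustrative step budget (not from the paper)

# Illustrative stages: start from a video base model, then widen the task mix.
SCHEDULE = (
    Stage("video_pretraining",  ("text_to_video",), 100_000),
    Stage("image_expansion",    ("text_to_video", "text_to_image", "image_editing"), 50_000),
    Stage("unified_finetuning", ("text_to_video", "text_to_image", "image_editing",
                                 "video_editing", "multi_reference_generation"), 30_000),
)

def run_schedule(schedule, train_step):
    """train_step(task) performs one optimizer update on a batch of that task."""
    for stage in schedule:
        for step in range(stage.steps):
            task = stage.tasks[step % len(stage.tasks)]  # naive round-robin task mixing
            train_step(task)

# Dry run with a stub training step that just counts task occurrences
if __name__ == "__main__":
    counts = {}
    run_schedule(
        (Stage("demo", ("text_to_video", "text_to_image"), 4),),
        lambda task: counts.update({task: counts.get(task, 0) + 1}),
    )
    print(counts)  # {'text_to_video': 2, 'text_to_image': 2}
```

Structuring the curriculum this way reflects the abstract's description of expanding a video generation base model into a multi-task generator: earlier capabilities stay in the task mix while new image and editing tasks are added, rather than being trained in isolation.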