Ming-Flash-Omni: A Sparse, Unified Architecture for Multimodal Perception and Generation
Inclusion AI, Bowen Ma, Cheng Zou, Canxiang Yan, Chunxiang Jin, Chunjie Shen, Dandan Zheng, Fudong Wang, Furong Xu, GuangMing Yao, Jun Zhou, Jingdong Chen, Jianing Li, Jianxin Sun, Jiajia Liu, Jianjiang Zhu, Jianping Jiang, Jun Peng, Kaixiang Ji, Kaimeng Ren, Libin Wang, Lixiang Ru
2025-10-30
Summary
This paper introduces Ming-Flash-Omni, a new and improved AI model that handles several kinds of information (text, images, and speech) within a single system, and does so well across both understanding and generation tasks.
What's the problem?
Existing AI models often struggle to scale up efficiently while still performing well across many kinds of tasks, such as understanding speech, recognizing images, and generating text or pictures. Building a single AI that excels at all of these is a major challenge, and making such powerful models usable without massive computing resources is difficult as well.
What's the solution?
The researchers created Ming-Flash-Omni, which is built with a technique called 'Mixture-of-Experts' (MoE): the model is very large overall (100 billion parameters), but for each token it processes it activates only a small portion of its 'brain' (6.1 billion parameters), which keeps computation fast and efficient even as capacity grows. They then trained the model on a wide range of data to strengthen its speech recognition, image generation, and image editing abilities, and added a new capability called 'generative segmentation' that gives finer spatial control over generated and edited images. A rough code sketch of the sparse-expert idea follows below.
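To make the Mixture-of-Experts idea concrete, here is a minimal sketch of sparse top-k expert routing in PyTorch. The layer sizes, expert count, and class name are illustrative assumptions for exposition, not the actual Ming-Flash-Omni or Ling-Flash-2.0 configuration; only the principle (a large total parameter count with few experts active per token) comes from the paper.

```python
# Illustrative sparse MoE layer; sizes and names are assumptions, not the paper's config.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseMoELayer(nn.Module):
    """Feed-forward layer with sparse top-k expert routing (illustrative sizes)."""

    def __init__(self, d_model=512, d_ff=2048, num_experts=32, top_k=2):
        super().__init__()
        self.top_k = top_k
        # Router scores every token against every expert.
        self.router = nn.Linear(d_model, num_experts)
        # Each expert is an ordinary feed-forward block.
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
             for _ in range(num_experts)]
        )

    def forward(self, x):
        # x: (num_tokens, d_model)
        scores = self.router(x)                                 # (num_tokens, num_experts)
        weights, expert_idx = scores.topk(self.top_k, dim=-1)   # keep only top-k experts per token
        weights = F.softmax(weights, dim=-1)                    # normalize the kept scores
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = expert_idx[:, slot] == e                 # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out


if __name__ == "__main__":
    tokens = torch.randn(8, 512)        # 8 tokens, illustrative width
    layer = SparseMoELayer()
    print(layer(tokens).shape)          # torch.Size([8, 512])
```

Because only two of the 32 experts run for each token in this sketch, the compute per token stays close to that of a much smaller dense model even though the total parameter count is far larger. That is the same efficiency argument the paper makes for its 100-billion-total, 6.1-billion-active design.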
Why it matters?
This work is important because it represents a significant step towards creating more general-purpose AI – AI that can do many different things well, much like a human. The improvements in efficiency mean these powerful models can become more accessible, and the advancements in multimodal understanding bring us closer to AI systems that can truly understand and interact with the world around us in a comprehensive way.
Abstract
We propose Ming-Flash-Omni, an upgraded version of Ming-Omni, built upon a sparser Mixture-of-Experts (MoE) variant of Ling-Flash-2.0 with 100 billion total parameters, of which only 6.1 billion are active per token. This architecture enables highly efficient scaling (dramatically improving computational efficiency while significantly expanding model capacity) and empowers stronger unified multimodal intelligence across vision, speech, and language, representing a key step toward Artificial General Intelligence (AGI). Compared to its predecessor, the upgraded version exhibits substantial improvements across multimodal understanding and generation. We significantly advance speech recognition capabilities, achieving state-of-the-art performance in contextual ASR and highly competitive results in dialect-aware ASR. In image generation, Ming-Flash-Omni introduces high-fidelity text rendering and demonstrates marked gains in scene consistency and identity preservation during image editing. Furthermore, Ming-Flash-Omni introduces generative segmentation, a capability that not only achieves strong standalone segmentation performance but also enhances spatial control in image generation and improves editing consistency. Notably, Ming-Flash-Omni achieves state-of-the-art results in text-to-image generation and generative segmentation, and sets new records on all 12 contextual ASR benchmarks, all within a single unified architecture.