LongCat-Flash-Omni Technical Report
Meituan LongCat Team, Bairui Wang, Bayan, Bin Xiao, Bo Zhang, Bolin Rong, Borun Chen, Chang Wan, Chao Zhang, Chen Huang, Chen Chen, Chen Chen, Chengxu Yang, Chengzuo Yang, Cong Han, Dandan Peng, Delian Ruan, Detai Xin, Disong Wang, Dongchao Yang, Fanfan Liu, Fengjiao Chen
2025-11-04
Summary
This paper introduces LongCat-Flash-Omni, a large open-source artificial intelligence model that can understand and interact with several types of information at the same time, including text, images, audio, and video.
What's the problem?
Existing AI models often struggle to combine information from multiple sources effectively, such as understanding what someone is saying while also seeing their facial expressions. Building a model that does this well, especially one that responds in real time, is difficult because it requires enormous computing power and careful design of both the data and the model itself.
What's the solution?
The researchers created LongCat-Flash-Omni, a massive model with 560 billion parameters, of which only about 27 billion are activated for any given input. They trained it in stages, starting with simpler tasks and gradually increasing the complexity so the model learns to process different types of data together. They also used a Mixture-of-Experts architecture to keep the computation per input manageable, and developed a new way of splitting up the work during training, called modality-decoupled parallelism, to handle the different types of data efficiently. Together, these choices let the model respond quickly despite its size.
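To make the Mixture-of-Experts idea concrete, the sketch below shows a router that sends each token to only a few experts, some of which are "zero-computation" experts that simply pass the input through, so only a fraction of the parameters do work for any given token. All names, sizes, and routing details here are illustrative assumptions, not the actual LongCat-Flash implementation.

```python
# Toy Mixture-of-Experts layer with "zero-computation" experts.
# Sizes and routing are hypothetical; NOT the LongCat-Flash architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoELayer(nn.Module):
    def __init__(self, d_model=64, n_ffn_experts=6, n_zero_experts=2, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.n_ffn_experts = n_ffn_experts
        self.n_experts = n_ffn_experts + n_zero_experts
        # Standard feed-forward experts (these cost compute when selected).
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_ffn_experts)
        )
        # The router scores every expert, including the zero-computation ones.
        self.router = nn.Linear(d_model, self.n_experts)

    def forward(self, x):  # x: (tokens, d_model)
        gates = F.softmax(self.router(x), dim=-1)
        weights, idx = gates.topk(self.top_k, dim=-1)  # top-k experts per token
        weights = weights / weights.sum(dim=-1, keepdim=True)
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e in range(self.n_experts):
                mask = idx[:, k] == e
                if not mask.any():
                    continue
                if e < self.n_ffn_experts:
                    y = self.experts[e](x[mask])  # real expert: runs an FFN
                else:
                    y = x[mask]                   # zero-computation expert: identity
                out[mask] += weights[mask, k].unsqueeze(-1) * y
        return out

tokens = torch.randn(8, 64)
print(ToyMoELayer()(tokens).shape)  # torch.Size([8, 64])
```

In a full-scale model, tokens routed to zero-computation experts skip the expensive feed-forward computation entirely, which is one way a 560B-parameter model can activate only a small subset of its weights per token.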
Why it matters?
This work pushes the boundaries of what openly available AI models can do. LongCat-Flash-Omni achieves top performance among open-source models at understanding and interacting with multiple types of data, and it does so in real time. By releasing the model publicly, the researchers hope to encourage further innovation in multimodal AI, leading to more capable and versatile systems in the future.
Abstract
We introduce LongCat-Flash-Omni, a state-of-the-art open-source omni-modal model with 560 billion parameters, excelling at real-time audio-visual interaction. By adopting a curriculum-inspired progressive training strategy that transitions from simpler to increasingly complex modality sequence modeling tasks, LongCat-Flash-Omni attains comprehensive multimodal capabilities while maintaining strong unimodal capability. Building upon LongCat-Flash, which adopts a high-performance Shortcut-connected Mixture-of-Experts (MoE) architecture with zero-computation experts, LongCat-Flash-Omni integrates efficient multimodal perception and speech reconstruction modules. Despite its immense size of 560B parameters (with 27B activated), LongCat-Flash-Omni achieves low-latency real-time audio-visual interaction. For training infrastructure, we developed a modality-decoupled parallelism scheme specifically designed to manage the data and model heterogeneity inherent in large-scale multimodal training. This innovative approach demonstrates exceptional efficiency by sustaining over 90% of the throughput achieved by text-only training. Extensive evaluations show that LongCat-Flash-Omni achieves state-of-the-art performance on omni-modal benchmarks among open-source models. Furthermore, it delivers highly competitive results across a wide range of modality-specific tasks, including text, image, and video understanding, as well as audio understanding and generation. We provide a comprehensive overview of the model architecture design, training procedures, and data strategies, and open-source the model to foster future research and development in the community.
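The abstract's modality-decoupled parallelism can be pictured as giving each module its own parallel layout rather than forcing one global scheme onto a heterogeneous network: small perception encoders are simply replicated, while the huge MoE backbone is sharded across devices. The sketch below is only a conceptual illustration under assumed module names and parallel degrees; the report's actual scheme differs in detail.

```python
# Conceptual sketch of a "modality-decoupled" parallel plan: each module gets
# its own parallelization strategy. Module names and degrees are assumptions,
# not LongCat-Flash-Omni's actual training configuration.
from dataclasses import dataclass

@dataclass
class ModulePlan:
    data_parallel: int    # replicas over the data dimension
    tensor_parallel: int  # ways each weight matrix is sharded
    expert_parallel: int  # ways MoE experts are spread across devices (1 = none)

# Lightweight encoders: replicate and parallelize over data only.
# The MoE language model: shard its weights and experts across devices.
plan = {
    "vision_encoder": ModulePlan(data_parallel=32, tensor_parallel=1, expert_parallel=1),
    "audio_encoder":  ModulePlan(data_parallel=32, tensor_parallel=1, expert_parallel=1),
    "moe_backbone":   ModulePlan(data_parallel=4,  tensor_parallel=2, expert_parallel=4),
}

for name, p in plan.items():
    devices = p.data_parallel * p.tensor_parallel * p.expert_parallel
    print(f"{name:>15}: {devices} devices "
          f"(dp={p.data_parallel}, tp={p.tensor_parallel}, ep={p.expert_parallel})")
```

Decoupling the plans this way lets the modality encoders and the MoE backbone each run with the layout that suits them, which is the kind of flexibility the report credits for sustaining over 90% of text-only training throughput.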