Ola: Pushing the Frontiers of Omni-Modal Language Model with Progressive Modality Alignment
Zuyan Liu, Yuhao Dong, Jiahui Wang, Ziwei Liu, Winston Hu, Jiwen Lu, Yongming Rao
2025-02-07

Summary
This paper introduces Ola, an omni-modal AI model that can understand and process different types of information, including text, images, audio, and video. It uses a step-by-step training strategy called Progressive Modality Alignment to connect these data types efficiently.
What's the problem?
Most AI models are specialized for a single type of input, such as text or images, and struggle to handle multiple types of data at once. Existing multi-modal models that do try to cover everything often demand heavy computational resources and still fall short of specialized single-modality models in performance.
What's the solution?
The researchers developed Ola using a strategy called Progressive Modality Alignment. This method starts by training the model on the most familiar pairing, text and images, then gradually adds speech data (which bridges language and audio) and finally video data (which ties all the modalities together). By expanding step by step, Ola learns to process all of these types of information jointly while keeping the amount of cross-modal alignment data, and therefore the training cost, relatively small. They also added sentence-by-sentence streaming speech generation so the model can respond aloud in real time, making it more interactive.
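To make the staged recipe concrete, here is a minimal sketch of what such a progressive training schedule could look like. The stage ordering follows the paper's description (image-text first, then speech, then video), but the model interface, dataset names, and helper callables (`set_active_modalities`, `load_data`, `train_step`) are hypothetical stand-ins, not Ola's released training code.

```python
# Sketch of a progressive modality alignment schedule: each stage unlocks
# more modalities and mixes in the corresponding alignment data.
# All names below are illustrative placeholders.
from dataclasses import dataclass
from typing import Sequence


@dataclass
class Stage:
    name: str
    modalities: Sequence[str]   # modalities active in this stage
    datasets: Sequence[str]     # alignment data mixed in during this stage


# Stage order follows the paper: the most distinct pair first (image + text),
# then speech data bridging language and audio, then video connecting all.
STAGES = [
    Stage("image-text", ["text", "image"], ["captions", "vqa"]),
    Stage("speech", ["text", "image", "audio"], ["asr", "audio_qa"]),
    Stage("video", ["text", "image", "audio", "video"], ["video_audio_qa"]),
]


def train_progressively(model, load_data, train_step):
    """Run the stages in order, widening the set of active modalities."""
    for stage in STAGES:
        model.set_active_modalities(stage.modalities)  # hypothetical API
        for dataset_name in stage.datasets:
            for batch in load_data(dataset_name):      # placeholder loader
                train_step(model, batch)               # placeholder optimizer step
```

The point of the sketch is only the staging logic: later stages inherit everything learned earlier, so the expensive full-modality data is needed only at the end.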
Why it matters?
This research is important because it moves AI closer to understanding and working with all kinds of information at once, just like humans do. Ola's efficient training method makes it easier to build powerful multi-modal models without requiring huge amounts of data or computing power. This could lead to better AI systems for tasks like education, entertainment, and virtual assistants.
Abstract
Recent advances in large language models, particularly following GPT-4o, have sparked increasing interest in developing omni-modal models capable of understanding more modalities. While some open-source alternatives have emerged, they still lag notably behind specialized single-modality models in performance. In this paper, we present Ola, an omni-modal language model that achieves competitive performance across image, video, and audio understanding compared to specialized counterparts. The core design of Ola lies in its progressive modality alignment strategy, which progressively extends the modalities supported by the language model. Our training pipeline begins with the most distinct modalities, image and text, then gradually expands the model's skill set with speech data that connects language and audio knowledge, and video data that connects all modalities. The progressive learning pipeline also allows us to keep the cross-modal alignment data relatively small, making it easier and less costly to develop omni-modal models from existing vision-language models. Moreover, to unlock an advanced interactive experience like GPT-4o, we further design a sentence-wise decoding solution for streaming speech generation. Extensive experiments demonstrate that Ola surpasses existing open omni-modal LLMs across all modalities while achieving highly competitive performance compared to state-of-the-art specialized models of similar sizes. We aim to make Ola a fully open omni-modal understanding solution to advance future research in this emerging field. Model weights, code, and data are open-sourced at https://github.com/Ola-Omni/Ola.
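The sentence-wise streaming decoding mentioned in the abstract can be illustrated with a short sketch: decode text incrementally, and hand each completed sentence to a speech synthesizer while the language model keeps generating. The `generate_tokens` and `synthesize` callables below are hypothetical placeholders, not Ola's actual decoder interface.

```python
# Sketch of sentence-wise streaming speech generation: buffer text tokens,
# and synthesize speech as soon as a sentence boundary is reached.
import re

# Treat common English and CJK sentence terminators as boundaries.
SENTENCE_END = re.compile(r"[.!?。！？]\s*$")


def stream_speech(generate_tokens, synthesize):
    """Yield synthesized audio chunks sentence by sentence."""
    buffer = ""
    for token in generate_tokens():           # incremental text decoding
        buffer += token
        if SENTENCE_END.search(buffer):       # sentence boundary reached
            yield synthesize(buffer.strip())  # speak this sentence now
            buffer = ""                       # keep decoding the rest
    if buffer.strip():                        # flush any trailing text
        yield synthesize(buffer.strip())
```

Decoding this way lets audio playback begin after the first sentence instead of waiting for the entire response, which is what makes the interaction feel real-time.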