Lyra: An Efficient and Speech-Centric Framework for Omni-Cognition

Zhisheng Zhong, Chengyao Wang, Yuqi Liu, Senqiao Yang, Longxiang Tang, Yuechen Zhang, Jingyao Li, Tianyuan Qu, Yanwei Li, Yukang Chen, Shaozuo Yu, Sitong Wu, Eric Lo, Shu Liu, Jiaya Jia

2024-12-13

Summary

This paper introduces Lyra, a multi-modal AI model centered on speech that also understands sound, images, and text, and can interact in real time while being more efficient and versatile than previous omni-models.

What's the problem?

As AI models become more advanced, they need to handle multiple types of information at once, like speech and images. However, many existing models do not effectively integrate speech with other data types, limiting their ability to understand and respond in complex situations. Additionally, relying on large amounts of data for training can be costly and inefficient.

What's the solution?

Lyra addresses these challenges with three main strategies: 1) It builds on existing open-source models and trains only small adapter modules (a multi-modality LoRA), which reduces training cost and the amount of new data needed; a rough sketch of this adapter idea follows below. 2) It uses a latent cross-modality regularizer and extractor to strengthen how speech relates to other data types, improving overall performance. 3) It constructs a large, high-quality dataset of 1.5 million multi-modal samples (language, vision, audio) and 12,000 long speech samples, so Lyra can understand long and complex speech inputs while remaining efficient.
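To make strategy 1 concrete, here is a minimal sketch of the general LoRA-adapter idea: keep a pretrained backbone weight frozen and learn a small low-rank update per modality. The names and shapes (LoRALinear, rank, the "speech"/"vision" adapter keys) are illustrative assumptions for this summary, not Lyra's actual implementation.

```python
# Minimal sketch: a frozen linear layer plus trainable low-rank (LoRA) updates,
# with one adapter per modality. Illustrative only; not Lyra's released code.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update: W x + scale * B(A x)."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # backbone weights stay frozen
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)  # adapter starts as a zero update
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))


# Hypothetical usage: share one frozen backbone projection, attach a separate
# low-rank adapter per modality, and train only the adapters.
backbone_proj = nn.Linear(1024, 1024)
adapters = nn.ModuleDict({
    "speech": LoRALinear(backbone_proj, rank=8),
    "vision": LoRALinear(backbone_proj, rank=8),
})

speech_tokens = torch.randn(2, 50, 1024)  # (batch, sequence, hidden)
out = adapters["speech"](speech_tokens)
trainable = sum(p.numel() for p in adapters.parameters() if p.requires_grad)
print(out.shape, f"trainable adapter parameters: {trainable}")
```

Because only the low-rank matrices are trained while the backbone stays frozen, the number of trainable parameters remains a small fraction of the full model, which is what lets this kind of strategy cut training cost and data requirements.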

Why it matters?

This research is significant because it improves how AI systems can process and understand multiple types of information simultaneously, especially in speech-related tasks. By making these models more efficient and capable of handling longer contexts, Lyra can significantly enhance applications in areas like virtual assistants, robotics, and any technology that requires real-time interaction with users.

Abstract

As Multi-modal Large Language Models (MLLMs) evolve, expanding beyond single-domain capabilities is essential to meet the demands for more versatile and efficient AI. However, previous omni-models have insufficiently explored speech, neglecting its integration with multi-modality. We introduce Lyra, an efficient MLLM that enhances multimodal abilities, including advanced long-speech comprehension, sound understanding, cross-modality efficiency, and seamless speech interaction. To achieve efficiency and speech-centric capabilities, Lyra employs three strategies: (1) leveraging existing open-source large models and a proposed multi-modality LoRA to reduce training costs and data requirements; (2) using a latent multi-modality regularizer and extractor to strengthen the relationship between speech and other modalities, thereby enhancing model performance; and (3) constructing a high-quality, extensive dataset that includes 1.5M multi-modal (language, vision, audio) data samples and 12K long speech samples, enabling Lyra to handle complex long speech inputs and achieve more robust omni-cognition. Compared to other omni-methods, Lyra achieves state-of-the-art performance on various vision-language, vision-speech, and speech-language benchmarks, while also using fewer computational resources and less training data.
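As a rough illustration of what a "latent multi-modality regularizer" could look like in practice, the sketch below uses a symmetric contrastive loss that pulls paired speech and text latents together. This is an assumption made for illustration; the paper's actual regularizer and extractor may be formulated differently.

```python
# Illustrative auxiliary loss aligning pooled speech latents with the latents of
# their paired text (or vision) content. Not Lyra's published formulation.
import torch
import torch.nn.functional as F


def latent_alignment_loss(speech_latents: torch.Tensor,
                          text_latents: torch.Tensor,
                          temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE-style loss between (batch, hidden) latents.

    Row i of each tensor is assumed to come from the same underlying sample.
    """
    s = F.normalize(speech_latents, dim=-1)
    t = F.normalize(text_latents, dim=-1)
    logits = s @ t.T / temperature                   # pairwise similarities
    targets = torch.arange(s.size(0), device=s.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.T, targets))


# Toy usage with random latents standing in for encoder outputs.
speech = torch.randn(4, 256)
text = torch.randn(4, 256)
print(latent_alignment_loss(speech, text).item())
```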