
Phi-4-Mini Technical Report: Compact yet Powerful Multimodal Language Models via Mixture-of-LoRAs

Abdelrahman Abouelenin, Atabak Ashfaq, Adam Atkinson, Hany Awadalla, Nguyen Bach, Jianmin Bao, Alon Benhaim, Martin Cai, Vishrav Chaudhary, Congcong Chen, Dong Chen, Dongdong Chen, Junkun Chen, Weizhu Chen, Yen-Chun Chen, Yi-ling Chen, Qi Dai, Xiyang Dai, Ruchao Fan, Mei Gao, Min Gao, Amit Garg

2025-03-04


Summary

This paper introduces Phi-4-Mini and Phi-4-Multimodal, two new AI models that are small but very powerful. Phi-4-Mini focuses on text tasks like math and coding, while Phi-4-Multimodal can also handle images and speech. Both models use innovative techniques to be efficient and accurate.

What's the problem?

AI models are getting bigger and more complex, which makes them harder to use on regular computers or devices with limited memory. At the same time, many models struggle to perform well across multiple tasks, especially when combining text, images, and speech.

What's the solution?

The researchers created Phi-4-Mini, a compact model with only 3.8 billion parameters that still performs as well as much larger models on tasks like math and coding. They also developed Phi-4-Multimodal, which uses a method called 'Mixture of LoRAs' to add the ability to process images and speech without interfering with its text capabilities. These models were trained on high-quality data and designed to work efficiently even on devices with less computing power.
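The core idea behind 'Mixture of LoRAs' can be sketched in a few lines: the base model's weights stay frozen, and each extra modality gets its own small low-rank (LoRA) adapter that a router selects at inference time. The class, names, and shapes below are illustrative assumptions, not code from the paper:

```python
import numpy as np

class LoRALinear:
    """Hypothetical sketch of a 'Mixture of LoRAs' linear layer.

    The base weight W is frozen; each modality gets its own low-rank
    (A, B) adapter pair, so adding vision/speech support does not
    disturb the text-only behavior of the base model.
    """

    def __init__(self, d_in, d_out, rank=4, modalities=("vision", "speech")):
        rng = np.random.default_rng(0)
        self.W = rng.standard_normal((d_out, d_in)) * 0.02  # frozen base weight
        # One (A, B) pair per modality; B starts at zero so each adapter
        # is a no-op until it is fine-tuned on that modality's data.
        self.adapters = {
            m: (rng.standard_normal((rank, d_in)) * 0.02,   # A: d_in -> rank
                np.zeros((d_out, rank)))                    # B: rank -> d_out
            for m in modalities
        }

    def forward(self, x, modality=None):
        y = x @ self.W.T
        if modality is not None:      # the modality router picks which adapter to apply
            A, B = self.adapters[modality]
            y = y + x @ A.T @ B.T     # low-rank update added to the frozen output
        return y

layer = LoRALinear(d_in=8, d_out=8)
x = np.ones((1, 8))
text_only = layer.forward(x)                    # base model path
with_speech = layer.forward(x, modality="speech")  # base + speech adapter
```

Because each adapter is small relative to the base model (the paper's speech/audio LoRA is only 460 million parameters next to 3.8 billion base parameters), new modalities can be added cheaply and removed without retraining the core model.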

Why it matters?

This matters because it shows that smaller AI models can still be very powerful and versatile. By making these models efficient and capable of handling multiple types of input, they could be used in more places, like smartphones or other devices, without needing huge amounts of computing power. This could make advanced AI more accessible for everyday tasks and applications.

Abstract

We introduce Phi-4-Mini and Phi-4-Multimodal, compact yet highly capable language and multimodal models. Phi-4-Mini is a 3.8-billion-parameter language model trained on high-quality web and synthetic data, significantly outperforming recent open-source models of similar size and matching the performance of models twice its size on math and coding tasks requiring complex reasoning. This achievement is driven by a carefully curated synthetic data recipe emphasizing high-quality math and coding datasets. Compared to its predecessor, Phi-3.5-Mini, Phi-4-Mini features an expanded vocabulary size of 200K tokens to better support multilingual applications, as well as group query attention for more efficient long-sequence generation. Phi-4-Multimodal is a multimodal model that integrates text, vision, and speech/audio input modalities into a single model. Its novel modality extension approach leverages LoRA adapters and modality-specific routers to allow multiple inference modes combining various modalities without interference. For example, it now ranks first in the OpenASR leaderboard to date, although the LoRA component of the speech/audio modality has just 460 million parameters. Phi-4-Multimodal supports scenarios involving (vision + language), (vision + speech), and (speech/audio) inputs, outperforming larger vision-language and speech-language models on a wide range of tasks. Additionally, we experiment to further train Phi-4-Mini to enhance its reasoning capabilities. Despite its compact 3.8-billion-parameter size, this experimental version achieves reasoning performance on par with or surpassing significantly larger models, including DeepSeek-R1-Distill-Qwen-7B and DeepSeek-R1-Distill-Llama-8B.
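The abstract's mention of group query attention refers to a standard trick for cheaper long-sequence generation: several query heads share one key/value head, shrinking the KV cache. The sketch below is a minimal illustration of that general mechanism under assumed head counts, not Phi-4-Mini's actual implementation:

```python
import numpy as np

def grouped_query_attention(q, k, v, group_size):
    """Minimal grouped-query attention sketch (illustrative shapes).

    q: (n_q_heads, seq, d); k, v: (n_kv_heads, seq, d), where
    n_kv_heads = n_q_heads // group_size. Each group of `group_size`
    query heads attends with one shared key/value head, so the KV
    cache is `group_size` times smaller than full multi-head attention.
    """
    n_q_heads, seq, d = q.shape
    out = np.empty_like(q)
    for h in range(n_q_heads):
        kv = h // group_size                      # map query head -> shared KV head
        scores = q[h] @ k[kv].T / np.sqrt(d)      # (seq, seq) attention scores
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)        # softmax over key positions
        out[h] = w @ v[kv]                        # weighted sum of shared values
    return out

rng = np.random.default_rng(0)
q = rng.standard_normal((8, 4, 16))   # 8 query heads
k = rng.standard_normal((2, 4, 16))   # only 2 KV heads -> 4x smaller KV cache
v = rng.standard_normal((2, 4, 16))
out = grouped_query_attention(q, k, v, group_size=4)
```

During autoregressive decoding, only `k` and `v` are cached per generated token, which is where the memory savings over standard multi-head attention come from.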