
Scaling Laws for Native Multimodal Models

Mustafa Shukor, Enrico Fini, Victor Guilherme Turrisi da Costa, Matthieu Cord, Joshua Susskind, Alaaeldin El-Nouby

2025-04-11


Summary

This paper talks about how to build AI models that can understand and work with different types of information, like text, images, and sounds, all at once. It studies the best ways to design these 'multimodal' models by looking at how their performance changes as they get bigger and use different training methods.

What's the problem?

The problem is that most current multimodal AI systems are built by sticking together separate parts that were each trained for just one type of data, like a vision model for images and a language model for text. While this can work well, it's not clear if this method is actually the best, especially as these models get larger and more complex. There hasn't been enough research on whether it's better to train a single model from scratch on all types of data, or to keep combining different specialized models.

What's the solution?

To answer this, the researchers trained hundreds of models with different designs and training mixtures, comparing models that combine the data types early in the process (early-fusion) with those that combine them later (late-fusion). They found that early-fusion models, which mix all the data types together from the start, actually perform better at smaller model sizes, are easier to train, and are more efficient to run. They also showed that adding a technique called Mixture of Experts, which lets the model route each piece of data to the parts of the network best suited to it, makes these early-fusion models even stronger.
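To make the early-fusion idea concrete, here is a minimal sketch (not the paper's actual code) of what "mixing data types from the start" means: raw image patches are projected with a simple learned layer rather than a separate pre-trained vision encoder, and the resulting image tokens are concatenated with text tokens into one sequence that a single model would process. All names, sizes, and the linear projection are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model = 16           # shared embedding width (illustrative)
patch_dim = 3 * 4 * 4  # one flattened 4x4 RGB image patch
vocab_size = 100

# Early fusion: a plain linear projection stands in for "no image encoder" --
# patches and text tokens are embedded into the SAME space from the start.
patch_proj = rng.normal(size=(patch_dim, d_model)) * 0.02
text_embed = rng.normal(size=(vocab_size, d_model)) * 0.02

def fuse_early(patches, token_ids):
    """Return one combined token sequence: [image tokens | text tokens]."""
    img_tokens = patches @ patch_proj    # (n_patches, d_model)
    txt_tokens = text_embed[token_ids]   # (n_text, d_model)
    return np.concatenate([img_tokens, txt_tokens], axis=0)

patches = rng.normal(size=(9, patch_dim))  # 9 flattened patches
token_ids = np.array([5, 42, 7])           # 3 text token ids
seq = fuse_early(patches, token_ids)
print(seq.shape)  # (12, 16): both modalities share one sequence
```

A late-fusion system would instead run the patches through a full pre-trained vision encoder and only attach its output to the language model afterwards; the comparison in the paper is between these two designs at scale.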

Why it matters?

This matters because it helps AI researchers and engineers figure out the best way to build future AI systems that can understand the world more like humans do, by combining information from different sources. By showing that early-fusion models can be more efficient and powerful, this research could lead to smarter, faster, and more practical AI for things like virtual assistants, healthcare, and self-driving cars.

Abstract

Building general-purpose models that can effectively perceive the world through multimodal signals has been a long-standing goal. Current approaches involve integrating separately pre-trained components, such as connecting vision encoders to LLMs and continuing multimodal training. While such approaches exhibit remarkable sample efficiency, it remains an open question whether such late-fusion architectures are inherently superior. In this work, we revisit the architectural design of native multimodal models (NMMs)--those trained from the ground up on all modalities--and conduct an extensive scaling laws study, spanning 457 trained models with different architectures and training mixtures. Our investigation reveals no inherent advantage to late-fusion architectures over early-fusion ones, which do not rely on image encoders. On the contrary, early-fusion exhibits stronger performance at lower parameter counts, is more efficient to train, and is easier to deploy. Motivated by the strong performance of the early-fusion architectures, we show that incorporating Mixture of Experts (MoEs) allows for models that learn modality-specific weights, significantly enhancing performance.
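As a toy illustration of the "modality-specific weights" that Mixture of Experts enables, the sketch below routes each token to one of several small expert networks via a learned router, so different experts can end up specializing in different modalities. This is a simplified top-1 router under assumed shapes, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

d_model, n_experts = 8, 2

# Each "expert" is a small feed-forward weight matrix; a learned router
# picks one expert per token, letting experts specialize (e.g. by modality).
experts = [rng.normal(size=(d_model, d_model)) * 0.1 for _ in range(n_experts)]
router_w = rng.normal(size=(d_model, n_experts))

def moe_layer(tokens):
    """Top-1 Mixture-of-Experts: send each token through its best-scoring expert."""
    logits = tokens @ router_w        # (n_tokens, n_experts) routing scores
    choice = logits.argmax(axis=-1)   # hard top-1 routing decision per token
    out = np.empty_like(tokens)
    for e in range(n_experts):
        mask = choice == e
        out[mask] = tokens[mask] @ experts[e]
    return out, choice

tokens = rng.normal(size=(5, d_model))
out, choice = moe_layer(tokens)
print(out.shape)  # (5, 8): same shape as the input, expert chosen per token
```

Because only one expert runs per token, total parameters grow with the number of experts while per-token compute stays roughly constant, which is why MoE is a natural fit for scaling the early-fusion models studied here.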