
MMTEB: Massive Multilingual Text Embedding Benchmark

Kenneth Enevoldsen, Isaac Chung, Imene Kerboua, Márton Kardos, Ashwin Mathur, David Stap, Jay Gala, Wissam Siblini, Dominik Krzemiński, Genta Indra Winata, Saba Sturua, Saiteja Utpala, Mathieu Ciancone, Marion Schaeffer, Gabriel Sequeira, Diganta Misra, Shreeya Dhakal, Jonathan Rystrøm, Roman Solomatin, Ömer Çağatan, Akash Kundu, Martin Bernstorff

2025-02-20


Summary

This paper introduces MMTEB, a massive benchmark designed to test how well AI embedding models understand and work with text in over 250 languages. It evaluates these models on a wide variety of tasks, including following instructions, retrieving information, and even working with code.

What's the problem?

Current benchmarks for testing AI models are limited because they only focus on a small number of tasks and languages. This makes it hard to know how well these models perform in real-world scenarios, especially for languages that don’t have a lot of data available or for complex tasks that require more advanced understanding.

What's the solution?

The researchers created MMTEB, which includes over 500 tasks across 250+ languages. They also introduced ways to make testing more efficient by reducing the amount of data needed while still keeping the results accurate. They tested different AI models using this benchmark and found that smaller, specialized models sometimes perform better than larger ones in multilingual settings.

Why it matters?

This matters because it provides a better way to evaluate AI models on a global scale, ensuring they work well for many languages and tasks. By making testing more efficient and accessible, MMTEB can help improve AI technologies for diverse applications like translation, search engines, and content understanding, especially in underrepresented languages.

Abstract

Text embeddings are typically evaluated on a limited set of tasks, which are constrained by language, domain, and task diversity. To address these limitations and provide a more comprehensive evaluation, we introduce the Massive Multilingual Text Embedding Benchmark (MMTEB) - a large-scale, community-driven expansion of MTEB, covering over 500 quality-controlled evaluation tasks across 250+ languages. MMTEB includes a diverse set of challenging, novel tasks such as instruction following, long-document retrieval, and code retrieval, representing the largest multilingual collection of evaluation tasks for embedding models to date. Using this collection, we develop several highly multilingual benchmarks, which we use to evaluate a representative set of models. We find that while large language models (LLMs) with billions of parameters can achieve state-of-the-art performance on certain language subsets and task categories, the best-performing publicly available model is multilingual-e5-large-instruct with only 560 million parameters. To facilitate accessibility and reduce computational cost, we introduce a novel downsampling method based on inter-task correlation, ensuring a diverse selection while preserving relative model rankings. Furthermore, we optimize tasks such as retrieval by sampling hard negatives, creating smaller but effective splits. These optimizations allow us to introduce benchmarks that drastically reduce computational demands. For instance, our newly introduced zero-shot English benchmark maintains a ranking order similar to the full-scale version but at a fraction of the computational cost.
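To give a rough intuition for the correlation-based downsampling the abstract describes, here is a toy sketch (not the paper's actual algorithm): given a matrix of model scores per task, greedily keep tasks that are least correlated with the ones already kept, so redundant tasks are dropped while diverse ones survive. The function name `downsample_tasks` and the greedy min-max-correlation rule are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def downsample_tasks(scores, k):
    """Toy greedy task selection based on inter-task correlation.

    scores: (n_models, n_tasks) array of per-task model scores
            (tasks are assumed non-constant across models).
    Returns a list of k selected task indices.
    """
    n_tasks = scores.shape[1]
    # Absolute inter-task correlation: corrcoef treats rows as variables,
    # so transpose to get an (n_tasks, n_tasks) matrix.
    corr = np.abs(np.corrcoef(scores.T))
    # Seed with the task most correlated with each model's mean score,
    # i.e. the single most "representative" task.
    mean_score = scores.mean(axis=1)
    rep = np.argmax(
        [np.corrcoef(scores[:, t], mean_score)[0, 1] for t in range(n_tasks)]
    )
    selected = [int(rep)]
    while len(selected) < k:
        remaining = [t for t in range(n_tasks) if t not in selected]
        # Add the task whose worst-case correlation with the
        # already-selected set is smallest (most novel task).
        nxt = min(remaining, key=lambda t: corr[t, selected].max())
        selected.append(int(nxt))
    return selected

# Four tasks scored by five models; task 1 is an exact copy of task 0,
# so at most one of the two should survive downsampling to three tasks.
scores = np.array([
    [0, 0, 4, 1],
    [1, 1, 0, 4],
    [2, 2, 3, 0],
    [3, 3, 1, 2],
    [4, 4, 2, 3],
], dtype=float)
print(downsample_tasks(scores, 3))
```

In this toy run the duplicated task is skipped in favor of the two less correlated ones, which mirrors the abstract's goal of keeping a diverse task selection while preserving relative model rankings.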