Global MMLU: Understanding and Addressing Cultural and Linguistic Biases in Multilingual Evaluation

Shivalika Singh, Angelika Romanou, Clémentine Fourrier, David I. Adelani, Jian Gang Ngui, Daniel Vila-Suero, Peerat Limkonchotiwat, Kelly Marchisio, Wei Qi Leong, Yosephine Susanto, Raymond Ng, Shayne Longpre, Wei-Yin Ko, Madeline Smith, Antoine Bosselut, Alice Oh, Andre F. T. Martins, Leshem Choshen, Daphne Ippolito, Enzo Ferrante, Marzieh Fadaee, Beyza Ermis

2024-12-06

Summary

This paper introduces Global MMLU, a new evaluation benchmark designed to address cultural and linguistic biases in multilingual datasets, making it easier to assess how well language models perform across different cultures and languages.

What's the problem?

Multilingual datasets often contain cultural biases that make them less effective for evaluating language models. These biases can come from the need for specific cultural knowledge to understand questions, which limits the usefulness of translated datasets. Additionally, simply translating questions can introduce errors that change their meaning, leading to unfair evaluations of model performance.

What's the solution?

The authors created Global MMLU, an improved version of the MMLU benchmark that covers 42 languages and addresses these biases. They engaged compensated professional and community annotators to verify translation quality and rigorously evaluated the cultural biases present in the original dataset. Global MMLU also includes subsets of questions labeled as culturally sensitive or culturally agnostic, allowing for a more complete and holistic evaluation of language models.
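
As an illustration, here is a minimal sketch of how the released dataset might be loaded and split into those two subsets with the Hugging Face datasets library. The dataset identifier, config name, label column, and label values below are assumptions about how the release is organized, not details taken from the paper, so check the official dataset card before relying on them.

```python
# Minimal sketch (assumptions noted below): load Global-MMLU and separate the
# culturally sensitive questions from the culturally agnostic ones.
from datasets import load_dataset

# Assumed Hugging Face Hub identifier and per-language config name.
DATASET_ID = "CohereForAI/Global-MMLU"
LANGUAGE = "en"

ds = load_dataset(DATASET_ID, LANGUAGE, split="test")

# Assumed annotation column and label values:
# "CS" = culturally sensitive, "CA" = culturally agnostic.
sensitive = ds.filter(lambda row: row.get("cultural_sensitivity_label") == "CS")
agnostic = ds.filter(lambda row: row.get("cultural_sensitivity_label") == "CA")

print(f"Culturally sensitive questions: {len(sensitive)}")
print(f"Culturally agnostic questions:  {len(agnostic)}")
```

Scoring a model on each subset separately, rather than only on the combined set, is what allows the ranking shifts discussed in the abstract below to surface.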

Why it matters?

This research is important because it helps create fairer evaluations for language models by considering cultural and linguistic differences. By improving how we assess these models, Global MMLU can lead to better AI systems that understand and respect diverse cultures, ultimately making technology more inclusive and effective for users around the world.

Abstract

Cultural biases in multilingual datasets pose significant challenges for their effectiveness as global benchmarks. These biases stem not only from language but also from the cultural knowledge required to interpret questions, reducing the practical utility of translated datasets like MMLU. Furthermore, translation often introduces artifacts that can distort the meaning or clarity of questions in the target language. A common practice in multilingual evaluation is to rely on machine-translated evaluation sets, but simply translating a dataset is insufficient to address these challenges. In this work, we trace the impact of both of these issues on multilingual evaluations and ensuing model performances. Our large-scale evaluation of state-of-the-art open and proprietary models illustrates that progress on MMLU depends heavily on learning Western-centric concepts, with 28% of all questions requiring culturally sensitive knowledge. Moreover, for questions requiring geographic knowledge, an astounding 84.9% focus on either North American or European regions. Rankings of model evaluations change depending on whether they are evaluated on the full portion or the subset of questions annotated as culturally sensitive, showing the distortion to model rankings when blindly relying on translated MMLU. We release Global-MMLU, an improved MMLU with evaluation coverage across 42 languages -- with improved overall quality by engaging with compensated professional and community annotators to verify translation quality while also rigorously evaluating cultural biases present in the original dataset. This comprehensive Global-MMLU set also includes designated subsets labeled as culturally sensitive and culturally agnostic to allow for more holistic, complete evaluation.
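
To make the ranking-shift point concrete, the sketch below compares two hypothetical models on the full test set versus the culturally sensitive subset. The model names and accuracy numbers are invented for illustration only and are not results from the paper.

```python
# Illustrative sketch with invented numbers: a model that leads on the full
# test set can fall behind once scoring is restricted to the culturally
# sensitive (CS) subset, which is the ranking distortion the paper measures.
full_set_accuracy = {"model_a": 0.71, "model_b": 0.69}
cs_subset_accuracy = {"model_a": 0.62, "model_b": 0.66}

def ranking(scores: dict) -> list:
    """Return model names ordered from highest to lowest accuracy."""
    return sorted(scores, key=scores.get, reverse=True)

print("Full-set ranking: ", ranking(full_set_accuracy))
print("CS-subset ranking:", ranking(cs_subset_accuracy))
```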