Global PIQA: Evaluating Physical Commonsense Reasoning Across 100+ Languages and Cultures

Tyler A. Chang, Catherine Arnett, Abdelrahman Eldesokey, Abdelrahman Sadallah, Abeer Kashar, Abolade Daud, Abosede Grace Olanihun, Adamu Labaran Mohammed, Adeyemi Praise, Adhikarinayum Meerajita Sharma, Aditi Gupta, Afitab Iyigun, Afonso Simplício, Ahmed Essouaied, Aicha Chorana, Akhil Eppa, Akintunde Oladipo, Akshay Ramesh, Aleksei Dorkin, Alfred Malengo Kondoro, Alham Fikri Aji, Ali Eren Çetintaş

2025-10-29

Summary

This paper introduces a new benchmark for testing how well large language models, like those powering chatbots, understand everyday physical common sense across different cultures and languages.

What's the problem?

Almost no existing benchmarks test whether language models truly grasp how everyday life works in cultures around the world. Most tests are designed from a Western perspective, and models can do well on them even if they don't understand daily life in other places. This means we don't really know how well they perform in different cultural contexts, especially for languages that have little text online.

What's the solution?

Researchers created a benchmark called Global PIQA, which contains physical commonsense questions in over 100 languages (116 language varieties in total). Each question presents two candidate answers, and the model must pick the more plausible one, so random guessing scores 50%. Rather than translating questions from English, 335 researchers from 65 countries wrote questions grounded in *their* own cultures – things like local foods, traditions, and customs. The team then evaluated existing language models on this new benchmark.
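The summary doesn't spell out the scoring protocol, but PIQA-style two-choice benchmarks are typically scored by asking which of the two candidate answers the model assigns higher likelihood. Below is a minimal sketch of that idea using Hugging Face transformers; the model name, the example fields (`prompt`, `choice_a`, `choice_b`, `label`), and the toy question are illustrative assumptions, not the actual Global PIQA format or the paper's exact protocol.

```python
# Minimal sketch of PIQA-style two-choice scoring with a causal LM.
# Assumptions (not from the paper): example fields "prompt", "choice_a",
# "choice_b", "label", and "gpt2" as a stand-in model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def continuation_logprob(prompt: str, continuation: str) -> float:
    """Sum of token log-probabilities the model assigns to `continuation`
    given `prompt`. Splitting at the prompt boundary after re-tokenizing
    the concatenation is a common approximation."""
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    full_ids = tokenizer(prompt + continuation, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # log P(token_i | tokens_<i) for every position after the first
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    token_lp = log_probs.gather(2, full_ids[:, 1:].unsqueeze(-1)).squeeze(-1)
    return token_lp[0, prompt_len - 1:].sum().item()  # continuation tokens only

def predict(example: dict) -> str:
    lp_a = continuation_logprob(example["prompt"], " " + example["choice_a"])
    lp_b = continuation_logprob(example["prompt"], " " + example["choice_b"])
    return "a" if lp_a > lp_b else "b"

# Toy stand-in; real Global PIQA items are culturally grounded and
# written natively in each language.
examples = [{"prompt": "To keep rice from sticking to the pot,",
             "choice_a": "rinse it before cooking.",
             "choice_b": "freeze it before cooking.",
             "label": "a"}]
acc = sum(predict(ex) == ex["label"] for ex in examples) / len(examples)
print(f"accuracy: {acc:.0%} (random chance is 50%)")
```

For proprietary models, which usually don't expose token log-probabilities, evaluation would instead rely on prompting the model to choose between the two options.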

Why it matters?

The results showed that while state-of-the-art models do well on Global PIQA in aggregate, they struggle significantly with languages that have fewer online resources: accuracy drops by as much as 37% on some of them, even though random guessing would already score 50%. Open models also generally perform worse than proprietary ones. This highlights that these models still need to improve their grasp of everyday knowledge in many cultures. The benchmark isn't just about improving the models themselves; it also helps us appreciate the diversity of human knowledge and how language is tied to culture.

Abstract

To date, there exist almost no culturally-specific evaluation benchmarks for large language models (LLMs) that cover a large number of languages and cultures. In this paper, we present Global PIQA, a participatory commonsense reasoning benchmark for over 100 languages, constructed by hand by 335 researchers from 65 countries around the world. The 116 language varieties in Global PIQA cover five continents, 14 language families, and 23 writing systems. In the non-parallel split of Global PIQA, over 50% of examples reference local foods, customs, traditions, or other culturally-specific elements. We find that state-of-the-art LLMs perform well on Global PIQA in aggregate, but they exhibit weaker performance in lower-resource languages (up to a 37% accuracy gap, despite random chance at 50%). Open models generally perform worse than proprietary models. Global PIQA highlights that in many languages and cultures, everyday knowledge remains an area for improvement, alongside more widely-discussed capabilities such as complex reasoning and expert knowledge. Beyond its uses for LLM evaluation, we hope that Global PIQA provides a glimpse into the wide diversity of cultures in which human language is embedded.