mSCoRe: a Multilingual and Scalable Benchmark for Skill-based Commonsense Reasoning
Nghia Trung Ngo, Franck Dernoncourt, Thien Huu Nguyen
2025-08-21
Summary
This paper introduces mSCoRe, a new benchmark designed to test how well advanced AI language models can understand and reason about everyday knowledge across different languages and cultures, and finds that current models still struggle with this complex task.
What's the problem?
It's not well understood how powerful AI language models, especially those trained to be better at reasoning, actually use different thinking skills when working through everyday knowledge across various languages and cultures. This gap makes it hard to improve these models so they work well for people around the world.
What's the solution?
The researchers created mSCoRe, a benchmark with three main parts: a detailed way to categorize different reasoning skills for close analysis, a smart system for generating test data focused specifically on commonsense knowledge, and a method for making the tests harder as AI models improve, so the benchmark stays a relevant challenge.
Why it matters?
This research matters because it provides a way to accurately measure and understand the limitations of current AI in handling diverse everyday knowledge, which is crucial for developing AI that can truly understand and interact with people across languages and cultural contexts.
Abstract
Recent advancements in reasoning-reinforced Large Language Models (LLMs) have shown remarkable capabilities in complex reasoning tasks. However, the mechanism underlying their utilization of different human reasoning skills remains poorly investigated, especially for multilingual commonsense reasoning that involves everyday knowledge across different languages and cultures. To address this gap, we propose a Multilingual and Scalable Benchmark for Skill-based Commonsense Reasoning (mSCoRe). Our benchmark incorporates three key components designed to systematically evaluate LLMs' reasoning capabilities: (1) a novel taxonomy of reasoning skills that enables fine-grained analysis of models' reasoning processes, (2) a robust data synthesis pipeline tailored specifically for commonsense reasoning evaluation, and (3) a complexity scaling framework that allows task difficulty to scale dynamically alongside future improvements in LLM abilities. Extensive experiments on eight state-of-the-art LLMs of varying sizes and training approaches demonstrate that mSCoRe remains significantly challenging for current models, particularly at higher complexity levels. Our results reveal the limitations of such reasoning-reinforced models when confronted with nuanced multilingual general and cultural commonsense. We further provide a detailed analysis of the models' reasoning processes, suggesting future directions for improving multilingual commonsense reasoning capabilities.