Omni-MATH: A Universal Olympiad Level Mathematic Benchmark For Large Language Models
Bofei Gao, Feifan Song, Zhe Yang, Zefan Cai, Yibo Miao, Qingxiu Dong, Lei Li, Chenghao Ma, Liang Chen, Runxin Xu, Zhengyang Tang, Benyou Wang, Daoguang Zan, Shanghaoran Quan, Ge Zhang, Lei Sha, Yichang Zhang, Xuancheng Ren, Tianyu Liu, Baobao Chang
2024-10-15

Summary
This paper presents Omni-MATH, a new benchmark designed to test large language models (LLMs) on advanced mathematical reasoning problems at the level of math competitions such as the Olympiads.
What's the problem?
Recent LLMs have made great strides in solving math problems, and existing benchmarks like GSM8K and MATH are becoming too easy for them: top models now reach very high accuracy (e.g., OpenAI o1 scores 94.8% on MATH), so these benchmarks no longer meaningfully challenge or differentiate the models' true reasoning abilities.
What's the solution?
To fill this gap, the authors built Omni-MATH, a collection of 4,428 competition-level math problems with rigorous human annotation. The problems are categorized into more than 33 sub-domains and span over 10 difficulty levels, enabling a more thorough, fine-grained assessment of how well LLMs handle complex mathematical reasoning. Even the strongest models struggled: OpenAI o1-mini and o1-preview reached only 60.54% and 52.55% accuracy, respectively.
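As a purely illustrative sketch (not the authors' evaluation code), the snippet below shows how per-sub-domain accuracy on a benchmark like this could be aggregated from graded model outputs. The record fields ("domain", "difficulty", "correct") are hypothetical placeholders rather than the actual Omni-MATH schema.

```python
# Minimal sketch of aggregating graded results into overall and
# per-domain accuracy. Field names are hypothetical, not the real schema.
from collections import defaultdict

def aggregate_accuracy(records):
    """Return (overall accuracy, accuracy per domain) from graded records.

    Each record is a dict with a boolean "correct" flag plus the
    problem's "domain" and "difficulty" metadata.
    """
    per_domain = defaultdict(lambda: [0, 0])  # domain -> [num_correct, total]
    num_correct = 0
    for rec in records:
        per_domain[rec["domain"]][0] += int(rec["correct"])
        per_domain[rec["domain"]][1] += 1
        num_correct += int(rec["correct"])
    overall = num_correct / len(records) if records else 0.0
    by_domain = {d: c / t for d, (c, t) in per_domain.items()}
    return overall, by_domain

# Toy usage with made-up grading results.
records = [
    {"domain": "Algebra", "difficulty": 5, "correct": True},
    {"domain": "Algebra", "difficulty": 7, "correct": False},
    {"domain": "Number Theory", "difficulty": 6, "correct": True},
]
overall, by_domain = aggregate_accuracy(records)
print(f"overall accuracy: {overall:.2%}")  # overall accuracy: 66.67%
print(by_domain)                           # {'Algebra': 0.5, 'Number Theory': 1.0}
```

In practice, reporting accuracy broken down by sub-domain and difficulty level, rather than a single overall number, is what lets a fine-grained benchmark like this pinpoint where models fail.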
Why it matters?
This research is important because it sets a higher standard for evaluating AI's mathematical abilities. By focusing on Olympiad-level problems, Omni-MATH can help researchers better understand the limitations of current models and encourage further advancements in AI's reasoning capabilities.
Abstract
Recent advancements in large language models (LLMs) have led to significant breakthroughs in mathematical reasoning capabilities. However, existing benchmarks like GSM8K or MATH are now being solved with high accuracy (e.g., OpenAI o1 achieves 94.8% on the MATH dataset), indicating their inadequacy for truly challenging these models. To bridge this gap, we propose a comprehensive and challenging benchmark specifically designed to assess LLMs' mathematical reasoning at the Olympiad level. Unlike existing Olympiad-related benchmarks, our dataset focuses exclusively on mathematics and comprises a vast collection of 4,428 competition-level problems with rigorous human annotation. These problems are meticulously categorized into over 33 sub-domains and span more than 10 distinct difficulty levels, enabling a holistic assessment of model performance in Olympiad-level mathematical reasoning. Furthermore, we conducted an in-depth analysis based on this benchmark. Our experimental results show that even the most advanced models, OpenAI o1-mini and OpenAI o1-preview, struggle with highly challenging Olympiad-level problems, reaching accuracies of only 60.54% and 52.55%, respectively, which highlights the significant challenges that remain in Olympiad-level mathematical reasoning.