AMO-Bench: Large Language Models Still Struggle in High School Math Competitions
Shengnan An, Xunliang Cai, Xuezhi Cao, Xiaoyu Li, Yehao Lin, Junlin Liu, Xinxuan Lv, Dan Ma, Xuanlin Wang, Ziwen Wang, Shuang Zhou
2025-10-31
Summary
This paper introduces AMO-Bench, a new and very difficult math benchmark designed to test the mathematical reasoning skills of advanced AI language models.
What's the problem?
Current benchmarks used to evaluate how well AI models can do math are becoming too easy for the most powerful models. These models now score nearly perfectly, which doesn't really show whether they *understand* the math or are just reproducing answers or patterns memorized from their training data. The researchers needed a way to truly challenge these AI systems with problems they haven't seen before.
What's the solution?
To solve this, the researchers created 50 completely original math problems that are as hard as or harder than those found in the International Mathematical Olympiad (IMO), a very prestigious math competition. Math experts cross-checked every problem to confirm it is truly challenging. Importantly, the benchmark asks only for the final answer to each problem, making it easy to grade the AI's responses automatically without needing to check complex step-by-step solutions.
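Because each problem requires only a final answer, grading can reduce to an exact-match check after light normalization. The sketch below is hypothetical (the benchmark's actual grading code is not shown in this summary); it assumes answers arrive as strings and that numeric answers like "0.75" and "3/4" should be treated as equal:

```python
from fractions import Fraction

def normalize(answer: str) -> str:
    """Canonicalize a final answer so trivially different forms match.
    Hypothetical grader sketch, not AMO-Bench's actual implementation."""
    s = answer.strip().lower().replace(" ", "")
    # Unify numeric answers: "0.75" and "3/4" both become "3/4".
    try:
        return str(Fraction(s))
    except (ValueError, ZeroDivisionError):
        return s  # non-numeric answers compared as normalized strings

def grade(prediction: str, reference: str) -> bool:
    """Exact match after normalization: no proof checking needed."""
    return normalize(prediction) == normalize(reference)

print(grade("0.75", "3/4"))       # → True
print(grade("x^2 + 1", "X^2+1"))  # → True
print(grade("41", "42"))          # → False
```

Final-answer grading like this is what makes large-scale automatic evaluation robust, since no human or model-based judge has to score intermediate reasoning steps.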
Why it matters?
The results show that even the best AI model struggles with these problems, getting only 52.4% correct, while most models score below 40%. This highlights that there is still a lot of room for improvement in AI's ability to reason mathematically. The benchmark itself, AMO-Bench, is released publicly so other researchers can use it to develop and test new AI models and push the boundaries of what's possible in mathematical AI.
Abstract
We present AMO-Bench, an Advanced Mathematical reasoning benchmark with Olympiad-level or even higher difficulty, comprising 50 human-crafted problems. Existing benchmarks have widely leveraged high school math competitions for evaluating the mathematical reasoning capabilities of large language models (LLMs). However, many existing math competitions are becoming less effective for assessing top-tier LLMs due to performance saturation (e.g., AIME24/25). To address this, AMO-Bench introduces more rigorous challenges by ensuring all 50 problems are (1) cross-validated by experts to meet at least International Mathematical Olympiad (IMO) difficulty standards, and (2) entirely original, to prevent potential performance leakage from data memorization. Moreover, each problem in AMO-Bench requires only a final answer rather than a proof, enabling automatic and robust grading. Experimental results across 26 LLMs show that even the best-performing model achieves only 52.4% accuracy on AMO-Bench, with most LLMs scoring below 40%. Beyond these low scores, our further analysis reveals a promising scaling trend with increasing test-time compute on AMO-Bench. These results highlight the significant room for improving the mathematical reasoning of current LLMs. We release AMO-Bench to facilitate further research into advancing the reasoning abilities of language models. https://amo-bench.github.io/
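The scaling trend with test-time compute that the abstract mentions is often probed by sampling many answers per problem and aggregating them, for example by majority vote over final answers (the paper's exact protocol is not specified in this summary). A minimal simulation sketch under that assumption, with a hypothetical answer pool:

```python
import random
from collections import Counter

def majority_vote(answers):
    """Pick the most frequent final answer among the sampled answers."""
    return Counter(answers).most_common(1)[0][0]

def accuracy_at_k(sample_pool, reference, k, trials=1000, seed=0):
    """Estimate majority-vote accuracy when k answers are drawn (with
    replacement) from a pool of model outputs. Hypothetical setup."""
    rng = random.Random(seed)
    hits = sum(
        majority_vote(rng.choices(sample_pool, k=k)) == reference
        for _ in range(trials)
    )
    return hits / trials

# Toy model: correct 40% of the time, wrong answers scattered and diverse.
pool = ["42"] * 40 + [f"wrong_{i}" for i in range(60)]
print(accuracy_at_k(pool, "42", k=1))  # roughly the per-sample accuracy
print(accuracy_at_k(pool, "42", k=9))  # higher: votes concentrate on "42"
```

When wrong answers are diverse, spending more samples per problem concentrates votes on the correct answer, which is one mechanism behind the kind of test-time scaling curve the paper reports.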