MorphoBench: A Benchmark with Difficulty Adaptive to Model Reasoning

Xukai Wang, Xuanbo Liu, Mingrui Chen, Haitian Zhong, Xuanlin Yang, Bohan Zeng, Jinbo Hu, Hao Liang, Junbo Niu, Xuchen Li, Ruitao Wu, Ruichuan An, Yang Shi, Liu Liu, Xu-Yao Zhang, Qiang Liu, Zhouchen Lin, Wentao Zhang, Bin Dong

2025-10-20

Summary

This paper introduces MorphoBench, a new way to test how well artificial intelligence models can reason and solve complex problems.

What's the problem?

Current benchmarks for AI reasoning have two main shortcomings: they cover only a narrow range of problem types, and their difficulty is fixed, so they don't get harder as models improve. This makes it difficult to accurately measure a model's true reasoning ability and to identify where it still falls short.

What's the solution?

The researchers created MorphoBench, which uses a wide variety of challenging questions from sources like academic competitions. What makes it special is that it can automatically adjust the difficulty of the questions. It does this by looking at how the AI tries to solve the problem and making the question more complex based on that. They also use computer simulations to create questions that can change dynamically, keeping the test challenging without needing a lot of extra work. They’ve built a collection of over 1,300 questions and have already used it to test models like o3 and GPT-5, adjusting the difficulty as needed.
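The adaptive-difficulty idea can be sketched as a simple loop: keep hardening a question until the model under test fails. This is only a minimal illustration, not MorphoBench's actual pipeline; the paper's method perturbs questions using key statements from the model's reasoning trace, whereas the `difficulty` counter, `constraints` list, and `toy_solver` below are hypothetical stand-ins.

```python
# Minimal sketch of difficulty adaptation (hypothetical names throughout).
# MorphoBench's real pipeline modifies questions using key statements from
# the model's reasoning process; here a numeric "difficulty" stands in.

def adapt_difficulty(question, solver, max_rounds=5, step=1):
    """Raise a question's difficulty until the solver fails or rounds run out."""
    for _ in range(max_rounds):
        if not solver(question):
            break  # solver failed: this difficulty level is discriminative
        # Hypothetical perturbation: bump the difficulty level and record
        # the extra constraint that made the question harder.
        question = {
            **question,
            "difficulty": question["difficulty"] + step,
            "constraints": question["constraints"] + ["extra reasoning step"],
        }
    return question

# Toy stand-in for a reasoning model: solves anything below difficulty 3.
toy_solver = lambda q: q["difficulty"] < 3

base = {"text": "Olympiad problem", "difficulty": 1, "constraints": []}
hardened = adapt_difficulty(base, toy_solver)
print(hardened["difficulty"])  # stops at the first level the solver fails
```

The loop stops as soon as the question discriminates, which mirrors the benchmark's goal of staying just beyond a model's current capability rather than arbitrarily hard.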

Why it matters?

MorphoBench is important because it provides a more thorough and reliable way to evaluate AI reasoning skills. This will help developers build better AI models that are not only smarter but also more dependable and scientifically sound, guiding improvements in how these models think and solve problems.

Abstract

With the advancement of powerful large-scale reasoning models, effectively evaluating the reasoning capabilities of these models has become increasingly important. However, existing benchmarks designed to assess the reasoning abilities of large models tend to be limited in scope and lack the flexibility to adapt their difficulty according to the evolving reasoning capacities of the models. To address this, we propose MorphoBench, a benchmark that incorporates multidisciplinary questions to evaluate the reasoning capabilities of large models and can adjust and update question difficulty based on the reasoning abilities of advanced models. Specifically, we curate the benchmark by selecting and collecting complex reasoning questions from existing benchmarks and sources such as Olympiad-level competitions. Additionally, MorphoBench adaptively modifies the analytical challenge of questions by leveraging key statements generated during the model's reasoning process. Furthermore, it includes questions generated using simulation software, enabling dynamic adjustment of benchmark difficulty with minimal resource consumption. We have gathered over 1,300 test questions and iteratively adjusted the difficulty of MorphoBench based on the reasoning capabilities of models such as o3 and GPT-5. MorphoBench enhances the comprehensiveness and validity of model reasoning evaluation, providing reliable guidance for improving both the reasoning abilities and scientific robustness of large models. The code has been released at https://github.com/OpenDCAI/MorphoBench.