Achieving Olympiad-Level Geometry Large Language Model Agent via Complexity-Boosting Reinforcement Learning

Haiteng Zhao, Junhao Shen, Yiming Zhang, Songyang Gao, Kuikun Liu, Tianyou Ma, Fan Zheng, Dahua Lin, Wenwei Zhang, Kai Chen

2025-12-12

Summary

This paper introduces InternGeometry, a new AI system built using a large language model that can solve very difficult geometry problems, even those at the International Mathematical Olympiad (IMO) level.

What's the problem?

While large language models are good at math, they struggle with geometry because they have weak heuristics for deciding *how* to approach a problem, specifically which auxiliary lines, points, or shapes to add to make it solvable. The current top-performing geometry systems, such as AlphaGeometry 2, are expert models that need huge amounts of synthesized example problems to learn from, making them resource-intensive.

What's the solution?

The researchers created InternGeometry, which repeatedly proposes candidate propositions and auxiliary geometric constructions, then checks whether each suggestion is valid using a symbolic engine, a computer program that understands geometry rules. It reflects on the engine's feedback and keeps a 'dynamic memory' of past attempts to improve its future suggestions, which lets it interact with the engine more than two hundred times on a single problem (a minimal sketch of this loop appears below). They also used a training method called 'Complexity-Boosting Reinforcement Learning' (CBRL), which gradually gives the AI harder and harder synthesized problems to solve. InternGeometry is built on a powerful language model called InternThinker-32B.
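To make the loop concrete, here is a minimal Python sketch of the propose-verify-reflect cycle described above. All names here (`SymbolicEngine`, `Feedback`, `llm_propose`, `solve`) are hypothetical illustrations under assumed interfaces, not the paper's actual API.

```python
from dataclasses import dataclass, field


@dataclass
class Feedback:
    """Result of asking the symbolic engine to check a proposal."""
    solved: bool
    message: str = ""                           # engine's explanation
    proof: list = field(default_factory=list)   # proof steps, if solved


class SymbolicEngine:
    """Stand-in for the paper's symbolic engine (hypothetical interface)."""

    def verify(self, problem, proposal) -> Feedback:
        # A real engine would deduce consequences of the problem's
        # hypotheses plus the proposed construction; this stub rejects all.
        return Feedback(solved=False, message="not derivable yet")


def solve(problem, llm_propose, engine, max_interactions=200):
    """Propose-verify-reflect loop with a dynamic memory of past attempts."""
    memory = []  # (proposal, feedback) pairs the model can reflect on
    for _ in range(max_interactions):
        # The LLM proposes a proposition or auxiliary construction,
        # conditioned on the problem and everything tried so far.
        proposal = llm_propose(problem, memory)
        feedback = engine.verify(problem, proposal)
        if feedback.solved:
            return feedback.proof
        # Remember the failed attempt and the engine's feedback so the
        # next proposal can avoid repeating the same mistake.
        memory.append((proposal, feedback))
    return None  # interaction budget exhausted without a proof
```

With a trivial `llm_propose` stub this loop would simply exhaust its budget and return `None`; the paper's contribution is that the RL-trained LLM, reflecting on the accumulated memory, makes each successive proposal more likely to succeed.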

Why it matters?

InternGeometry is important because it solves 44 of the 50 IMO geometry problems from 2000-2024, exceeding the average human gold medalist score, while needing dramatically less training data than previous AI systems like AlphaGeometry 2: only 13K examples, roughly 0.004% of AlphaGeometry 2's training data. This shows that large language model agents can reach expert level at complex geometry, and it opens the door for new research in this area. The system can even propose auxiliary constructions for IMO problems that do not appear in human solutions.

Abstract

Large language model (LLM) agents exhibit strong mathematical problem-solving abilities and can even solve International Mathematical Olympiad (IMO) level problems with the assistance of formal proof systems. However, due to weak heuristics for auxiliary constructions, AI for geometry problem solving remains dominated by expert models such as AlphaGeometry 2, which rely heavily on large-scale data synthesis and search for both training and evaluation. In this work, we make the first attempt to build a medalist-level LLM agent for geometry and present InternGeometry. InternGeometry overcomes the heuristic limitations in geometry by iteratively proposing propositions and auxiliary constructions, verifying them with a symbolic engine, and reflecting on the engine's feedback to guide subsequent proposals. A dynamic memory mechanism enables InternGeometry to conduct more than two hundred interactions with the symbolic engine per problem. To further accelerate learning, we introduce Complexity-Boosting Reinforcement Learning (CBRL), which gradually increases the complexity of synthesized problems across training stages. Built on InternThinker-32B, InternGeometry solves 44 of 50 IMO geometry problems (2000-2024), exceeding the average gold medalist score (40.9), using only 13K training examples, just 0.004% of the data used by AlphaGeometry 2, demonstrating the potential of LLM agents on expert-level geometry tasks. InternGeometry can also propose novel auxiliary constructions for IMO problems that do not appear in human solutions. We will release the model, data, and symbolic engine to support future research.
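For readers who want the gist of CBRL in code, the following is a minimal sketch under stated assumptions: the problem synthesizer, the `attempt`/`update` callables, and the binary reward are hypothetical placeholders standing in for the paper's actual generator and RL machinery.

```python
import random


def synthesize_problem(complexity: int, rng: random.Random) -> dict:
    """Hypothetical synthesizer: higher complexity yields figures with
    more points and relations (the paper's actual generator differs)."""
    return {
        "num_points": 3 + complexity,
        "num_relations": 2 + 2 * complexity,
        "seed": rng.random(),
    }


def cbrl_train(attempt, update, stages=4, problems_per_stage=1000, seed=0):
    """Complexity-Boosting RL loop: each training stage samples harder
    synthesized problems than the last, and the agent is updated from
    its own verified solve attempts.

    `attempt(problem) -> (trajectory, solved)` rolls the agent out
    against the symbolic engine; `update(trajectory, reward)` applies
    an RL update. Both are placeholders for the agent's internals.
    """
    rng = random.Random(seed)
    for stage in range(stages):
        for _ in range(problems_per_stage):
            problem = synthesize_problem(complexity=stage, rng=rng)
            trajectory, solved = attempt(problem)
            reward = 1.0 if solved else 0.0  # binary verifier reward
            update(trajectory, reward)
```

The key design idea this sketch captures is the curriculum: rather than training on maximally hard problems from the start, the synthesizer's complexity parameter is raised stage by stage so the agent always sees problems just beyond its current ability.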