AM-Thinking-v1: Advancing the Frontier of Reasoning at 32B Scale
Yunjie Ji, Xiaoyu Tian, Sitong Zhao, Haotian Wang, Shuaiting Chen, Yiping Peng, Han Zhao, Xiangang Li
2025-05-14

Summary
This paper introduces AM-Thinking-v1, a dense language model with 32 billion parameters that excels at solving math problems and writing code, thanks to a carefully designed post-training process.
What's the problem?
Many AI models struggle to reason through complex math and programming tasks, and the strongest reasoning performance usually comes from very large, expensive models that most people cannot afford to run or deploy.
What's the solution?
The researchers built AM-Thinking-v1 by combining supervised fine-tuning, where the model learns from examples with correct answers, with reinforcement learning, where it improves by practicing and receiving feedback on its attempts. This two-stage post-training let the model reach top-tier performance in math and coding, even though it is far from the largest model available.
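To make the two-stage recipe concrete, here is a minimal PyTorch sketch: a supervised cross-entropy pass over reference answers, followed by a single reward-weighted policy-gradient step. Everything below (the tiny TinyLM model, the random placeholder data, the fixed reward value) is a hypothetical stand-in for illustration, not the authors' actual pipeline or scale.

```python
# Illustrative toy of SFT followed by a reward-based RL update.
# Not the paper's implementation; model, data, and reward are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, HIDDEN = 100, 64

class TinyLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, HIDDEN)
        self.rnn = nn.GRU(HIDDEN, HIDDEN, batch_first=True)
        self.head = nn.Linear(HIDDEN, VOCAB)

    def forward(self, tokens):
        h, _ = self.rnn(self.embed(tokens))
        return self.head(h)  # next-token logits

model = TinyLM()
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)

# Stage 1: supervised fine-tuning on (prompt, reference answer) token sequences.
batch = torch.randint(0, VOCAB, (8, 32))           # placeholder token ids
logits = model(batch[:, :-1])
sft_loss = F.cross_entropy(logits.reshape(-1, VOCAB), batch[:, 1:].reshape(-1))
sft_loss.backward()
opt.step()
opt.zero_grad()

# Stage 2: reinforcement learning. Sample a completion, score it (e.g., 1.0 if
# a math answer checks out, 0.0 otherwise), and increase the log-probability
# of tokens that earned the reward.
prompt = torch.randint(0, VOCAB, (1, 16))
with torch.no_grad():
    sampled = torch.distributions.Categorical(logits=model(prompt)[:, -1]).sample()
completion = torch.cat([prompt, sampled.unsqueeze(1)], dim=1)
reward = 1.0                                        # placeholder verifier outcome
logp = F.log_softmax(model(completion[:, :-1])[:, -1], dim=-1)
rl_loss = -reward * logp.gather(1, sampled.unsqueeze(1)).mean()
rl_loss.backward()
opt.step()
opt.zero_grad()
```

The real system applies this idea at 32B scale with curated reasoning data and feedback on full solutions; the sketch only shows the shape of the two updates.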
Why it matters?
This matters because it shows that open-source models of moderate size can still reach frontier-level reasoning without massive compute budgets, making advanced AI more accessible to students, teachers, and developers.
Abstract
AM-Thinking-v1, a 32B dense language model, achieves state-of-the-art performance in mathematical and coding tasks by leveraging supervised fine-tuning and reinforcement learning, demonstrating the capabilities of mid-scale open-source models.