GrandCode: Achieving Grandmaster Level in Competitive Programming via Agentic Reinforcement Learning
DeepReinforce Team, Xiaoya Li, Xiaofei Sun, Guoyin Wang, Songqiao Su, Chris Shum, Jiwei Li
2026-04-06
Summary
This paper introduces GrandCode, an AI system that reaches grandmaster level in competitive programming, one of the few coding domains where top humans have still outperformed AI.
What's the problem?
Competitive programming is one of the few coding areas where the best human programmers still beat the best AI. Existing AI systems, even advanced ones like Google’s Gemini, haven’t been able to consistently achieve top rankings in live coding competitions. The challenge lies in creating an AI that can not only write code but also strategize, debug, and adapt like a human programmer during a fast-paced contest.
What's the solution?
The researchers developed GrandCode, which uses a team of different AI 'agents' working together. These agents handle tasks like coming up with ideas, writing the code, creating test cases, and summarizing the problem. Importantly, they used a new technique called Agentic GRPO to help these agents learn and improve, even when the rewards for good solutions are delayed and the AI's actions might not perfectly match the training data. This allows the system to learn from its mistakes and refine its approach over time.
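The summary doesn't give implementation details, but the core idea behind GRPO-style training, which Agentic GRPO builds on, can be sketched: several rollouts of the same problem are scored, and each rollout's advantage is its reward normalized against the group's mean and spread. This is a minimal illustrative sketch only; the function name and reward scheme below are assumptions, not taken from the paper, and the paper's Agentic GRPO additionally handles multi-stage rollouts, delayed rewards, and off-policy drift.

```python
# Minimal sketch of GRPO-style group-relative advantages (illustrative only).
from statistics import mean, stdev

def grpo_advantages(rewards, eps=1e-8):
    """Normalize each rollout's reward against its group's mean and std.

    rewards: scalar rewards, one per rollout of the same problem.
    Returns one advantage per rollout; positive means better than the group.
    """
    mu = mean(rewards)
    sigma = stdev(rewards) if len(rewards) > 1 else 0.0
    return [(r - mu) / (sigma + eps) for r in rewards]

# Example: four rollouts of one contest problem, rewarded 1.0 if the
# solution is accepted and 0.0 otherwise.
advs = grpo_advantages([1.0, 0.0, 0.0, 1.0])
# Successful rollouts get positive advantages; failed ones get negative
# advantages, so the policy is pushed toward the behavior that solved it.
```

Because advantages are computed relative to the group rather than a learned value function, this style of training needs no separate critic, which is part of why GRPO-like methods are popular for training coding models.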
Why it matters?
GrandCode is a big deal because it's the first AI system to consistently beat all human participants in live competitive programming contests. This shows that AI has reached a level where it can surpass even the most skilled human coders in complex, real-time problem-solving, which has implications for the future of software development and AI capabilities.
Abstract
Competitive programming remains one of the last areas of coding where the best humans still outperform AI. The strongest AI system to date still underperforms the best human competitive programmers: the most recent best result, Google's Gemini 3 Deep Think, attained 8th place, and even that was not achieved under live competition conditions. In this work, we introduce GrandCode, a multi-agent RL system designed for competitive programming. GrandCode's capability is attributed to two key factors: (1) it orchestrates a variety of agentic modules (hypothesis proposal, solver, test generator, summarization, etc.) and jointly improves them through post-training and online test-time RL; (2) we introduce Agentic GRPO, designed specifically for the multi-stage agent rollouts, delayed rewards, and severe off-policy drift that are prevalent in agentic RL. GrandCode is the first AI system to consistently beat all human participants in live competitive programming contests: in the three most recent Codeforces live competitions, Round 1087 (Mar 21, 2026), Round 1088 (Mar 28, 2026), and Round 1089 (Mar 29, 2026), GrandCode placed first in all of them, beating all human participants, including legendary grandmasters. GrandCode shows that AI systems have reached a point where they surpass the strongest human programmers on the most competitive coding tasks.