X-Coder: Advancing Competitive Programming with Fully Synthetic Tasks, Solutions, and Tests

Jie Wu, Haoling Li, Xin Zhang, Jiani Guo, Jane Luo, Steven Liu, Yangyu Huang, Ruihang Chu, Scarlett Li, Yujiu Yang

2026-01-13

Summary

This paper explores a new way to train large language models for code by creating all of the training data artificially, instead of relying on code written by humans. The authors show that this fully synthetic approach can produce surprisingly capable models.

What's the problem?

Current AI models for writing code are very capable, but they need a *lot* of examples to learn from. Those examples usually come from existing code online, which is limiting: the supply of real-world code is finite, and it's hard to mine truly challenging problems from it. Essentially, these models are capped by the amount of 'real-world' code available.

What's the solution?

The researchers developed a system called SynthSmith that automatically *generates* coding problems, solutions, and even tests to check if the solutions are correct. This means they don't need to rely on human-written code. They then used this generated data to train a new model, called X-Coder, using two main techniques: first, showing the model examples (supervised fine-tuning), and second, rewarding the model for correct solutions (reinforcement learning). They also investigated how to best scale up the amount of synthetic data.
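The paper doesn't publish SynthSmith's verification code, but the core idea of "verified solutions and tests" is execution-based filtering: run each generated solution against the generated test cases, and keep only solutions that pass all of them. Here is a minimal sketch of that step; the function name `passes_tests` and the example task are my own illustrations, not the paper's implementation:

```python
import os
import subprocess
import sys
import tempfile

def passes_tests(solution_code, test_cases, timeout=5):
    """Run a candidate Python solution against (stdin, expected_stdout) pairs.

    Returns True only if every test case exits cleanly and the output
    matches the expected output (ignoring trailing whitespace).
    """
    # Write the candidate solution to a temporary script.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(solution_code)
        path = f.name
    try:
        for stdin_text, expected in test_cases:
            result = subprocess.run(
                [sys.executable, path],
                input=stdin_text,
                capture_output=True,
                text=True,
                timeout=timeout,
            )
            if result.returncode != 0 or result.stdout.strip() != expected.strip():
                return False  # crashed or wrong answer on this test
        return True
    finally:
        os.unlink(path)

# Toy synthetic task: "print the sum of two integers"
solution = "a, b = map(int, input().split())\nprint(a + b)\n"
tests = [("1 2\n", "3"), ("10 -4\n", "6")]
print(passes_tests(solution, tests))  # → True
```

Filtering like this gives clean supervision in both training stages the paper uses: verified solutions become supervised fine-tuning targets, and the pass/fail signal itself can serve as the reward during reinforcement learning.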

Why it matters?

This work is important because it shows that we can build powerful code-writing AI without being limited by the availability of human-written code. This opens the door to creating more advanced and scalable AI systems for programming, and it helps us understand how to best train these models using artificial data. It also suggests that focusing on the *quality* of the generated data and using a step-by-step training process are key to success.

Abstract

Competitive programming presents great challenges for Code LLMs due to its intensive reasoning demands and high logical complexity. However, current Code LLMs still rely heavily on real-world data, which limits their scalability. In this paper, we explore a fully synthetic approach: training Code LLMs with entirely generated tasks, solutions, and test cases, to empower code reasoning models without relying on real-world data. To support this, we leverage feature-based synthesis to propose a novel data synthesis pipeline called SynthSmith. SynthSmith shows strong potential in producing diverse and challenging tasks, along with verified solutions and tests, supporting both supervised fine-tuning and reinforcement learning. Based on the proposed synthetic SFT and RL datasets, we introduce the X-Coder model series, which achieves a notable pass rate of 62.9 avg@8 on LiveCodeBench v5 and 55.8 on v6, outperforming DeepCoder-14B-Preview and AReal-boba2-14B despite having only 7B parameters. In-depth analysis reveals that scaling laws hold on our synthetic dataset, and we explore which dimensions are more effective to scale. We further provide insights into code-centric reinforcement learning and highlight the key factors that shape performance through detailed ablations and analysis. Our findings demonstrate that scaling high-quality synthetic data and adopting staged training can greatly advance code reasoning, while mitigating reliance on real-world coding data.
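For readers unfamiliar with the metric: by the common convention, avg@8 is pass@1 averaged over 8 sampled completions per problem, i.e. the mean fraction of samples that solve each problem. A minimal sketch, assuming that convention (the function name and toy data are mine):

```python
def avg_at_k(results):
    """results: one list of booleans per problem (one entry per sample).

    avg@k = mean over problems of the fraction of samples that pass,
    expressed as a percentage. Equivalent to pass@1 averaged over
    k samples per problem.
    """
    per_problem = [sum(samples) / len(samples) for samples in results]
    return 100.0 * sum(per_problem) / len(per_problem)

# Two problems, 8 samples each: 6/8 and 4/8 samples pass.
results = [[True] * 6 + [False] * 2, [True] * 4 + [False] * 4]
print(avg_at_k(results))  # → 62.5
```

A score of 62.9 avg@8 on LiveCodeBench v5 thus means that, averaged across problems, about 63% of the model's sampled solutions pass all tests.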