FrontierCS: Evolving Challenges for Evolving Intelligence
Qiuyang Mang, Wenhao Chai, Zhifei Li, Huanzhi Mao, Shang Zhou, Alexander Du, Hanchen Li, Shu Liu, Edwin Chen, Yichuan Wang, Xieting Chu, Zerui Cheng, Yuan Xu, Tian Xia, Zirui Wang, Tianneng Shi, Jianzhu Yao, Yilong Zhao, Qizheng Zhang, Charlie Ruan, Zeyu Shen, Kaiyuan Liu
2025-12-18
Summary
This paper introduces a new way to test how well AI models can actually *do* computer science, not just answer questions about it. The benchmark, called FrontierCS, presents AI models with real, open-ended programming challenges.
What's the problem?
Existing AI benchmarks usually pose problems with one right answer that can be easily checked. That doesn't really test whether an AI can *think* like a computer scientist and devise a good solution to a complex problem with no single perfect answer. Current tests often reward getting *an* answer, not a *good* answer.
What's the solution?
The researchers created FrontierCS, a collection of 156 challenging computer science problems. These aren't simple exercises; they're problems whose optimal solutions are unknown even to experts. Instead of outputting a direct answer, the AI has to write and run actual code. Experts designed the problems and built automatic evaluators that measure how well the AI's code performs, awarding partial credit for good but imperfect solutions. The researchers then tested several AI models on these problems.
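To make the partial-credit idea concrete, here is a minimal sketch of what an automatic evaluator *might* look like for one classic open-ended problem type, a traveling-salesman-style tour. The function name and scoring rule (reference cost divided by candidate cost, with invalid outputs scored zero) are illustrative assumptions for this summary, not the paper's actual evaluator code.

```python
import math

def evaluate_tour(distance, tour, ref_length):
    """Score a candidate TSP tour against an expert reference tour length.

    Returns 0.0 for invalid tours; otherwise a partial score in (0, 1],
    where 1.0 means the candidate matches or beats the reference.
    (Hypothetical scoring rule, for illustration only.)
    """
    n = len(distance)
    # Validity check: the tour must visit every city exactly once.
    if sorted(tour) != list(range(n)):
        return 0.0
    # Total length of the closed tour.
    length = sum(distance[tour[i]][tour[(i + 1) % n]] for i in range(n))
    # Partial credit: ratio of reference length to candidate length.
    return min(1.0, ref_length / length)

# Four cities on a unit square: the perimeter tour is optimal (length 4).
r2 = math.sqrt(2)
dist = [[0, 1, r2, 1],
        [1, 0, 1, r2],
        [r2, 1, 0, 1],
        [1, r2, 1, 0]]

print(evaluate_tour(dist, [0, 1, 2, 3], 4.0))  # optimal tour -> 1.0
print(evaluate_tour(dist, [0, 2, 1, 3], 4.0))  # valid but worse -> partial score
print(evaluate_tour(dist, [0, 1, 2], 4.0))     # invalid (city 3 missing) -> 0.0
```

A rule like this gives a model credit proportional to how close its heuristic gets to the expert baseline, which is the spirit of the "objective partial scoring" the paper describes.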
Why it matters?
This benchmark matters because it pushes AI beyond memorizing facts or retrieving known solutions. It forces models to demonstrate genuine problem-solving skills in computer science, such as designing algorithms and systems. The results show that current AI models still fall far short of human experts, and that simply giving them more 'thinking time' isn't enough to close the gap.
Abstract
We introduce FrontierCS, a benchmark of 156 open-ended problems across diverse areas of computer science, designed and reviewed by experts, including CS PhDs and top-tier competitive programming participants and problem setters. Unlike existing benchmarks that focus on tasks with known optimal solutions, FrontierCS targets problems where the optimal solution is unknown, but the quality of a solution can be objectively evaluated. Models solve these tasks by implementing executable programs rather than outputting a direct answer. FrontierCS includes algorithmic problems, which are often NP-hard variants of competitive programming problems with objective partial scoring, and research problems with the same property. For each problem we provide an expert reference solution and an automatic evaluator. Combining open-ended design, measurable progress, and expert curation, FrontierCS provides a benchmark at the frontier of computer-science difficulty. Empirically, we find that frontier reasoning models still lag far behind human experts on both the algorithmic and research tracks, that increasing reasoning budgets alone does not close this gap, and that models often over-optimize for generating merely workable code instead of discovering high-quality algorithms and system designs.