OpenGame: Open Agentic Coding for Games
Yilei Jiang, Jinyuan Hu, Qianyin Xiao, Yaozhi Zheng, Ruize Ma, Kaituo Feng, Jiaming Han, Tianshuo Peng, Kaixuan Fan, Manyuan Zhang, Xiangyu Yue
2026-04-21
Summary
This paper introduces OpenGame, a system designed to let AI agents create playable web games from start to finish, a task current AI tools consistently struggle with.
What's the problem?
While AI is getting good at writing small pieces of code, it consistently fails when asked to build a complete game. Games require many different files to work together, and AI-generated projects often break down: files conflict with each other, connections between game elements (such as scenes that are never wired together) are left broken, or the gameplay logic doesn't match what was asked for.
What's the solution?
The researchers created OpenGame, which is powered by GameCoder-27B, an AI model trained specifically for game development. OpenGame adds two reusable skills: a Template Skill that builds a solid starting structure for a new game from a growing library of past project skeletons, and a Debug Skill that systematically finds and fixes integration errors as the game is assembled. They also created OpenGame-Bench, a way to automatically test and score AI-generated games on whether the game builds correctly (Build Health), how easy it is to understand visually (Visual Usability), and whether it actually follows the original instructions (Intent Alignment).
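The two skills described above can be pictured as a small, evolving knowledge base: one lookup grows project skeletons from past builds, the other maps known error signatures to verified fixes. The sketch below is a minimal illustration of that idea; the class name, data layout, and methods are all assumptions for exposition, not the paper's actual API.

```python
# Hypothetical sketch of OpenGame's Game Skill as two simple lookups.
# All names here (SkillLibrary, add_template, repair, ...) are
# illustrative assumptions, not the framework's real interface.

class SkillLibrary:
    def __init__(self):
        # Template Skill: project skeletons accumulated from experience,
        # keyed by a genre tag.
        self.templates = {}  # genre -> list of skeletons (file lists)
        # Debug Skill: a "living protocol" of verified fixes,
        # keyed by an error signature seen in build logs.
        self.fixes = {}      # error signature -> fix description

    def add_template(self, genre, skeleton):
        self.templates.setdefault(genre, []).append(skeleton)

    def scaffold(self, genre):
        """Return the most recently added skeleton for a genre, if any."""
        entries = self.templates.get(genre)
        return entries[-1] if entries else None

    def record_fix(self, signature, fix):
        self.fixes[signature] = fix

    def repair(self, error_log):
        """Collect every known fix whose signature appears in the log."""
        return [fix for sig, fix in self.fixes.items() if sig in error_log]
```

The point of the sketch is the division of labor: scaffolding happens before generation (stable architecture up front), while repair matches whole integration-error patterns against a build log instead of patching isolated syntax bugs.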
Why it matters?
This work is important because it moves AI beyond just helping with individual coding tasks and towards being able to build complex, interactive applications like video games. If successful, this could lead to AI tools that can help anyone create their own games or other interactive experiences without needing to be an expert programmer.
Abstract
Game development sits at the intersection of creative design and intricate software engineering, demanding the joint orchestration of game engines, real-time loops, and tightly coupled state across many files. While Large Language Models (LLMs) and code agents now solve isolated programming tasks with ease, they consistently stumble when asked to produce a fully playable game from a high-level design, collapsing under cross-file inconsistencies, broken scene wiring, and logical incoherence. We bridge this gap with OpenGame, the first open-source agentic framework explicitly designed for end-to-end web game creation. At its core lies Game Skill, a reusable, evolving capability composed of a Template Skill that grows a library of project skeletons from experience and a Debug Skill that maintains a living protocol of verified fixes - together enabling the agent to scaffold stable architectures and systematically repair integration errors rather than patch isolated syntax bugs. Powering this framework is GameCoder-27B, a code LLM specialized for game engine mastery through a three-stage pipeline of continual pre-training, supervised fine-tuning, and execution-grounded reinforcement learning. Since verifying interactive playability is fundamentally harder than checking static code, we further introduce OpenGame-Bench, an evaluation pipeline that scores agentic game generation along Build Health, Visual Usability, and Intent Alignment via headless browser execution and VLM judging. Across 150 diverse game prompts, OpenGame establishes a new state-of-the-art. We hope OpenGame pushes code agents beyond discrete software engineering problems and toward building complex, interactive real-world applications. Our framework will be fully open-sourced.