Diffusion Models Are Real-Time Game Engines
Dani Valevski, Yaniv Leviathan, Moab Arar, Shlomi Fruchter
2024-08-28

Summary
This paper introduces GameNGen, a game engine powered entirely by a neural model that generates gameplay in real time as the player acts, demonstrated on the classic game DOOM.
What's the problem?
Interactive video games are traditionally built as hand-written game loops: code that gathers player input, updates the game state, and renders the next frame, which demands substantial programming effort for every rule and behavior. Whether a neural model can take over that entire loop, simulating a complex game interactively, at high visual quality, and in real time, has been an open question.
What's the solution?
GameNGen addresses this with a neural model that predicts the next frame of the game from the frames and actions that came before. Training happens in two phases: first, a reinforcement learning agent learns to play the game while its play sessions (frames and actions) are recorded; then a diffusion model is trained on those recordings to generate each next frame conditioned on that history. At inference time the model runs at over 20 frames per second on a single TPU (Tensor Processing Unit), fast enough for real-time gameplay.
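To make the frame-by-frame idea concrete, here is a minimal Python sketch of an autoregressive inference loop of this kind. It is an illustration under stated assumptions, not the authors' implementation: `denoise_next_frame`, `read_player_action`, and the context length are hypothetical names and values.

```python
from collections import deque

def run_neural_game_loop(model, initial_frames, read_player_action,
                         num_steps, context_len=64):
    """Hypothetical autoregressive loop: each new frame is generated from a
    sliding window of recent frames and player actions, then fed back in as
    context for the following frame."""
    frames = deque(initial_frames, maxlen=context_len)
    actions = deque([0] * len(initial_frames), maxlen=context_len)
    for _ in range(num_steps):
        actions.append(read_player_action())      # latest player input
        next_frame = model.denoise_next_frame(    # diffusion sampling step(s)
            past_frames=list(frames),
            past_actions=list(actions),
        )
        frames.append(next_frame)                 # model output becomes context
        yield next_frame
```

The design point is that the model itself is the game loop: there is no hand-written game logic, only a learned next-frame predictor fed by player input and its own previous outputs.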
Why it matters?
This research is significant because it shows how advanced AI techniques can be applied to video game development, potentially leading to more immersive and dynamic gaming experiences. By leveraging neural networks, GameNGen opens up new possibilities for creating games that are not only visually impressive but also responsive to player actions in real time.
Abstract
We present GameNGen, the first game engine powered entirely by a neural model that enables real-time interaction with a complex environment over long trajectories at high quality. GameNGen can interactively simulate the classic game DOOM at over 20 frames per second on a single TPU. Next frame prediction achieves a PSNR of 29.4, comparable to lossy JPEG compression. Human raters are only slightly better than random chance at distinguishing short clips of the game from clips of the simulation. GameNGen is trained in two phases: (1) an RL-agent learns to play the game and the training sessions are recorded, and (2) a diffusion model is trained to produce the next frame, conditioned on the sequence of past frames and actions. Conditioning augmentations enable stable auto-regressive generation over long trajectories.
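Two details in the abstract benefit from being spelled out. PSNR (peak signal-to-noise ratio) measures, in decibels, how close predicted frames are to the ground-truth frames. The conditioning augmentation is, per the paper, Gaussian noise added to the context frames during training (with the noise level supplied to the model), so that at inference time the model tolerates the small errors that accumulate when it is fed its own outputs. The NumPy sketch below illustrates both; `psnr` and `add_context_noise` are illustrative names, not the paper's code.

```python
import numpy as np

def psnr(reference: np.ndarray, prediction: np.ndarray,
         max_value: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB: higher means closer to the reference."""
    mse = np.mean((reference.astype(np.float64) -
                   prediction.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_value ** 2 / mse)

def add_context_noise(context_frames: np.ndarray, rng: np.random.Generator,
                      max_noise: float = 0.7) -> tuple[np.ndarray, float]:
    """Noise augmentation for training: corrupt the past frames with Gaussian
    noise of a random level, and return that level so the model can also be
    conditioned on it (the max_noise value here is an assumed placeholder)."""
    noise_level = rng.uniform(0.0, max_noise)
    noisy = context_frames + noise_level * rng.standard_normal(context_frames.shape)
    return noisy, noise_level
```

For scale, a PSNR of 29.4 dB on a 0-255 pixel range corresponds to a root-mean-square error of roughly 8.6 gray levels per pixel.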