Evaluating Gemini Robotics Policies in a Veo World Simulator
Gemini Robotics Team, Coline Devin, Yilun Du, Debidatta Dwibedi, Ruiqi Gao, Abhishek Jindal, Thomas Kipf, Sean Kirmani, Fangchen Liu, Anirudha Majumdar, Andrew Marmon, Carolina Parada, Yulia Rubanova, Dhruv Shah, Vikas Sindhwani, Jie Tan, Fei Xia, Ted Xiao, Sherry Yang, Wenhao Yu, Allan Zhou
2025-12-12
Summary
This paper explores using advanced video generation models to test and improve robots' ability to perform tasks in a wide range of situations, going beyond just the scenarios they were originally trained on.
What's the problem?
Today, robots are usually tested in environments very similar to the ones they were trained in. This doesn't tell us how well they'll perform in new, unexpected situations, what researchers call 'out-of-distribution' (OOD) generalization. It's also hard to systematically check for safety issues, such as a robot bumping into things or misinterpreting instructions, because creating diverse test scenarios is difficult and expensive.
What's the solution?
The researchers built a system that uses a powerful video generation model, Veo, to create realistic simulated environments for robots. The system can edit the scene, for example by adding new objects, distractor items, or backgrounds, and can render it from multiple viewpoints. Importantly, it's conditioned on the actions the robot commands and keeps the generated video consistent across all camera angles. They then used this system to test existing robot policies in many different scenarios, including ones the robots hadn't seen before; a sketch of this evaluation loop follows below.
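To make the closed-loop idea concrete, here is a minimal, hedged sketch of what evaluating a policy inside a generative world model can look like: the policy sees only generated frames, and the video model predicts the next observation conditioned on the commanded action. All names here (`Policy`, `WorldModel`, `rollout_in_world_model`, the dummy stubs) are illustrative assumptions, not the paper's actual API.

```python
# Hypothetical sketch of closed-loop policy evaluation inside a generative
# world model; interfaces are illustrative, not the paper's real API.
from typing import Protocol
import numpy as np

class Policy(Protocol):
    def act(self, observation: np.ndarray, instruction: str) -> np.ndarray:
        """Map the current camera frame to a robot action command."""
        ...

class WorldModel(Protocol):
    def step(self, observation: np.ndarray, action: np.ndarray) -> np.ndarray:
        """Generate the next frame conditioned on the commanded action."""
        ...

def rollout_in_world_model(
    policy: Policy,
    world_model: WorldModel,
    initial_obs: np.ndarray,
    instruction: str,
    horizon: int = 100,
) -> list[np.ndarray]:
    """Roll the policy out entirely inside generated video, no real robot needed."""
    obs, frames = initial_obs, [initial_obs]
    for _ in range(horizon):
        action = policy.act(obs, instruction)  # policy only ever sees generated frames
        obs = world_model.step(obs, action)    # video model predicts the consequence
        frames.append(obs)
    return frames

# Tiny stand-ins so the sketch runs end to end.
class RandomPolicy:
    def act(self, observation, instruction):
        return np.random.uniform(-1.0, 1.0, size=7)  # e.g. a 7-DoF arm command

class FrozenWorldModel:
    def step(self, observation, action):
        return observation  # a real model would generate the next video frame

frames = rollout_in_world_model(
    RandomPolicy(), FrozenWorldModel(),
    initial_obs=np.zeros((224, 224, 3)), instruction="pick up the banana",
)
```

The resulting frames can then be scored by an automated success detector, which is what makes large-scale, robot-free evaluation possible.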
Why does it matter?
This work matters because it provides a way to thoroughly test robots *before* they're deployed in the real world. By simulating a huge variety of situations, including potentially dangerous ones, we can identify weaknesses in robot policies and make them safer and more reliable. This could speed up the development of robots that handle complex tasks in unpredictable environments, like homes or warehouses.
Abstract
Generative world models hold significant potential for simulating interactions with visuomotor policies in varied environments. Frontier video models can enable generation of realistic observations and environment interactions in a scalable and general manner. However, the use of video models in robotics has been limited primarily to in-distribution evaluations, i.e., scenarios that are similar to ones used to train the policy or fine-tune the base video model. In this report, we demonstrate that video models can be used for the entire spectrum of policy evaluation use cases in robotics: from assessing nominal performance to out-of-distribution (OOD) generalization, and probing physical and semantic safety. We introduce a generative evaluation system built upon a frontier video foundation model (Veo). The system is optimized to support robot action conditioning and multi-view consistency, while integrating generative image-editing and multi-view completion to synthesize realistic variations of real-world scenes along multiple axes of generalization. We demonstrate that the system preserves the base capabilities of the video model to enable accurate simulation of scenes that have been edited to include novel interaction objects, novel visual backgrounds, and novel distractor objects. This fidelity enables accurately predicting the relative performance of different policies in both nominal and OOD conditions, determining the relative impact of different axes of generalization on policy performance, and performing red teaming of policies to expose behaviors that violate physical or semantic safety constraints. We validate these capabilities through 1600+ real-world evaluations of eight Gemini Robotics policy checkpoints and five tasks for a bimanual manipulator.
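As a rough illustration of how such a system can attribute performance to individual axes of generalization, the hedged sketch below edits the initial scene along one axis at a time, rolls out in the world model (reusing `rollout_in_world_model` from the sketch above), and tallies a success rate per axis. `edit_scene`, `judge_success`, and the axis names are assumptions for illustration, not the paper's interfaces.

```python
# Hypothetical sketch: per-axis OOD evaluation by editing the initial scene
# before rolling out in the generative world model. All helpers are stand-ins.
import numpy as np

GENERALIZATION_AXES = {
    "novel_interaction_objects": ["plush toy", "glass jar"],
    "novel_backgrounds": ["wooden table", "patterned tablecloth"],
    "novel_distractors": ["coffee mug nearby", "scattered utensils"],
}

def edit_scene(obs: np.ndarray, edit: str) -> np.ndarray:
    """Stand-in for generative image editing; a real system would repaint obs."""
    return obs

def judge_success(frames: list[np.ndarray], instruction: str) -> bool:
    """Stand-in for an automated success classifier over the generated rollout."""
    return False

def evaluate_per_axis(policy, world_model, nominal_obs, instruction):
    """Estimate a success rate for each generalization axis separately."""
    rates = {}
    for axis, edits in GENERALIZATION_AXES.items():
        outcomes = []
        for edit in edits:
            obs = edit_scene(nominal_obs, edit)  # synthesize one OOD variant
            frames = rollout_in_world_model(policy, world_model, obs, instruction)
            outcomes.append(judge_success(frames, instruction))
        rates[axis] = float(np.mean(outcomes))   # per-axis success rate
    return rates
```

Comparing such per-axis rates across policy checkpoints is, in spirit, how relative rankings and the most damaging axes of generalization could be identified without touching a real robot.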