RobotArena ∞: Scalable Robot Benchmarking via Real-to-Sim Translation
Yash Jangir, Yidi Zhang, Kashu Yamazaki, Chenyu Zhang, Kuan-Hsun Tu, Tsung-Wei Ke, Lei Ke, Yonatan Bisk, Katerina Fragkiadaki
2025-10-28
Summary
This paper introduces a new way to test how well robots can perform a variety of tasks, moving beyond the limitations of testing them in the real world or simple computer simulations.
What's the problem?
Testing robots is really hard. Real-world testing takes a lot of time and effort, and it can be dangerous if things go wrong. Current computer simulations aren't great either, because robots are trained and tested in the *same* fake environment, so they can't tell you whether a robot trained on real-world data will cope with a new situation. Plus, figuring out if a robot is doing a good job often requires a person to watch and judge, which doesn't scale well.
What's the solution?
The researchers created a system that uses videos of humans performing tasks to automatically build realistic computer simulations. They then use these simulations to test robots, getting feedback from people online to judge how well the robot is doing. They also change up the simulated environments – like the textures or where objects are placed – to see if the robot can still handle things when conditions aren't perfect. This allows for a lot of testing without the risks and costs of the real world.
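To make the perturbation idea concrete, here is a minimal, self-contained Python sketch of randomizing textures and object placements before each evaluation rollout. The scene fields, texture names, and jitter ranges are illustrative assumptions, not the paper's actual implementation.

```python
import random
from dataclasses import dataclass, replace

@dataclass
class SceneConfig:
    table_texture: str
    object_positions: dict  # object name -> (x, y) tabletop position in metres

def perturb_scene(base: SceneConfig, rng: random.Random) -> SceneConfig:
    """Return a copy of the scene with a swapped texture and jittered object placements."""
    textures = ["wood", "marble", "checker", "plain_white"]  # hypothetical texture set
    jittered = {
        name: (x + rng.uniform(-0.05, 0.05), y + rng.uniform(-0.05, 0.05))
        for name, (x, y) in base.object_positions.items()
    }
    return replace(base, table_texture=rng.choice(textures), object_positions=jittered)

if __name__ == "__main__":
    rng = random.Random(0)
    base = SceneConfig(
        table_texture="wood",
        object_positions={"mug": (0.30, 0.10), "plate": (0.45, -0.05)},
    )
    # Evaluate the same policy across controlled variations of one reconstructed scene.
    for trial in range(3):
        scene = perturb_scene(base, rng)
        print(f"trial {trial}: texture={scene.table_texture}, objects={scene.object_positions}")
```

In practice each perturbed `SceneConfig` would be handed to the simulator before a policy rollout, so the same reconstructed scene can be reused many times under controlled variation.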
Why it matters?
This new testing method is important because it provides a way to reliably and quickly evaluate robots as they become more complex and capable. It addresses a major gap in robotics research by offering a scalable and reproducible benchmark for assessing how well robots trained in the real world can generalize to new situations, ultimately helping to build more versatile and useful robots.
Abstract
The pursuit of robot generalists - instructable agents capable of performing diverse tasks across diverse environments - demands rigorous and scalable evaluation. Yet real-world testing of robot policies remains fundamentally constrained: it is labor-intensive, slow, unsafe at scale, and difficult to reproduce. Existing simulation benchmarks are similarly limited, as they train and test policies within the same synthetic domains and cannot assess models trained from real-world demonstrations or alternative simulation environments. As policies expand in scope and complexity, these barriers only intensify, since defining "success" in robotics often hinges on nuanced human judgments of execution quality. In this paper, we introduce a new benchmarking framework that overcomes these challenges by shifting VLA evaluation into large-scale simulated environments augmented with online human feedback. Leveraging advances in vision-language models, 2D-to-3D generative modeling, and differentiable rendering, our approach automatically converts video demonstrations from widely used robot datasets into simulated counterparts. Within these digital twins, we assess VLA policies using both automated VLM-guided scoring and scalable human preference judgments collected from crowdworkers, transforming human involvement from tedious scene setup, resetting, and safety supervision into lightweight preference comparisons. To measure robustness, we systematically perturb simulated environments along multiple axes, such as textures and object placements, stress-testing policy generalization under controlled variation. The result is a continuously evolving, reproducible, and scalable benchmark for real-world trained robot manipulation policies, addressing a critical missing capability in today's robotics landscape.
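The abstract describes collecting scalable human preference judgments from crowdworkers but does not spell out how pairwise comparisons are aggregated into policy rankings. As an illustration only, the sketch below uses a simple Elo-style update to turn hypothetical "policy A was preferred over policy B" judgments into per-policy scores; the benchmark's actual scoring rule may differ.

```python
from collections import defaultdict

def elo_update(ratings, winner, loser, k=32.0):
    """Apply one Elo-style update for a single pairwise preference (winner preferred over loser)."""
    expected = 1.0 / (1.0 + 10 ** ((ratings[loser] - ratings[winner]) / 400.0))
    ratings[winner] += k * (1.0 - expected)
    ratings[loser] -= k * (1.0 - expected)

if __name__ == "__main__":
    ratings = defaultdict(lambda: 1000.0)  # every policy starts at the same rating
    # Hypothetical crowdworker judgments: (preferred policy, other policy).
    comparisons = [("policy_A", "policy_B"), ("policy_A", "policy_C"), ("policy_B", "policy_C")]
    for winner, loser in comparisons:
        elo_update(ratings, winner, loser)
    for name, score in sorted(ratings.items(), key=lambda kv: -kv[1]):
        print(f"{name}: {score:.1f}")
```

The appeal of this kind of aggregation is that each crowdworker only makes a lightweight binary comparison, yet many such comparisons still yield a stable ranking of policies.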