High-Fidelity Simulated Data Generation for Real-World Zero-Shot Robotic Manipulation Learning with Gaussian Splatting
Haoyu Zhao, Cheng Zeng, Linghao Zhuang, Yaxi Zhao, Shengke Xue, Hao Wang, Xingyue Zhao, Zhongyu Li, Kehan Li, Siteng Huang, Mingxiu Chen, Xin Li, Deli Zhao, Hua Zou
2025-10-14
Summary
This paper introduces a new system called RoboSimGS that aims to make it easier to train robots using simulations that closely match the real world, ultimately reducing the need for expensive and time-consuming real-world robot training data.
What's the problem?
Training robots is hard because getting enough real-world data is expensive and takes a lot of effort. While simulations are a good alternative, they often look and behave differently from the real world: the visual appearance may not match, and the physics may not respond the same way, making it difficult for a robot trained in simulation to perform tasks successfully in reality. This disconnect is known as the 'sim-to-real gap'.
What's the solution?
RoboSimGS tackles this problem by creating highly realistic simulations from real-world images. It uses a technique called 3D Gaussian Splatting to make the simulation look nearly identical to the real environment, and pairs it with physics-ready mesh shapes for objects so the robot can interact with them realistically. A key innovation is using a multi-modal large language model (an AI that can interpret images as well as text) to automatically figure out how objects should behave physically, such as how dense or stiff they are, and whether they have hinges or sliding parts, just from images of them. This allows complex, interactive simulations to be built without manual programming.
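To make the idea concrete, here is a minimal sketch of what the MLLM-inferred asset description in such a pipeline could look like: an object record that pairs physical properties with an articulated joint specification. The class names, fields, and the `infer_asset_spec` helper are illustrative assumptions, not the paper's actual interface.

```python
# Hypothetical sketch of an MLLM-inferred asset specification for a
# Real2Sim pipeline in the spirit of RoboSimGS. Names and fields are
# illustrative assumptions, not the paper's actual data format.
from dataclasses import dataclass, field
from typing import List, Literal

@dataclass
class PhysicalProperties:
    density_kg_m3: float          # e.g. inferred from material appearance
    friction: float               # surface friction coefficient
    stiffness: float              # contact stiffness for compliant parts

@dataclass
class JointSpec:
    kind: Literal["hinge", "slider", "fixed"]  # inferred kinematic structure
    parent_link: str
    child_link: str
    axis: tuple                   # joint axis in the object frame
    limits: tuple                 # (lower, upper) joint limits

@dataclass
class ArticulatedAsset:
    name: str
    mesh_path: str                # collision/physics mesh for the simulator
    splat_path: str               # 3DGS asset used only for rendering
    physics: PhysicalProperties
    joints: List[JointSpec] = field(default_factory=list)

def infer_asset_spec(images: List[str], mllm_query) -> ArticulatedAsset:
    """Ask an MLLM to propose physical and kinematic parameters from images.

    `mllm_query` is a placeholder for whatever vision-language model call
    the pipeline uses; this only shows the expected shape of its output.
    """
    response = mllm_query(
        images=images,
        prompt="Estimate density, friction, stiffness, and any hinge or "
               "sliding joints for the pictured object. Reply as JSON.",
    )
    return ArticulatedAsset(
        name=response["name"],
        mesh_path=response["mesh_path"],
        splat_path=response["splat_path"],
        physics=PhysicalProperties(**response["physics"]),
        joints=[JointSpec(**j) for j in response.get("joints", [])],
    )
```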
Why it matters?
This work is important because it offers a way to significantly reduce the cost and effort of training robots. Because the simulations are much more accurate and physically realistic, robots can be trained entirely in simulation and then deployed directly in the real world, without collecting additional real-world training data. It also improves existing robot training methods by providing better simulation data, making robots more capable and adaptable.
Abstract
The scalability of robotic learning is fundamentally bottlenecked by the significant cost and labor of real-world data collection. While simulated data offers a scalable alternative, it often fails to generalize to the real world due to significant gaps in visual appearance, physical properties, and object interactions. To address this, we propose RoboSimGS, a novel Real2Sim2Real framework that converts multi-view real-world images into scalable, high-fidelity, and physically interactive simulation environments for robotic manipulation. Our approach reconstructs scenes using a hybrid representation: 3D Gaussian Splatting (3DGS) captures the photorealistic appearance of the environment, while mesh primitives for interactive objects ensure accurate physics simulation. Crucially, we pioneer the use of a Multi-modal Large Language Model (MLLM) to automate the creation of physically plausible, articulated assets. The MLLM analyzes visual data to infer not only physical properties (e.g., density, stiffness) but also complex kinematic structures (e.g., hinges, sliding rails) of objects. We demonstrate that policies trained entirely on data generated by RoboSimGS achieve successful zero-shot sim-to-real transfer across a diverse set of real-world manipulation tasks. Furthermore, data from RoboSimGS significantly enhances the performance and generalization capabilities of SOTA methods. Our results validate RoboSimGS as a powerful and scalable solution for bridging the sim-to-real gap.
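As a rough illustration of the hybrid representation described in the abstract, the sketch below separates the two roles: a physics engine steps the mesh bodies used for contact, and the resulting poses are applied to the corresponding Gaussian-splat assets purely for image rendering. The `PhysicsEngine` and `SplatRenderer` interfaces are hypothetical stand-ins, not APIs from the paper.

```python
# Minimal sketch of the hybrid sim loop implied by a mesh-for-physics /
# 3DGS-for-appearance split. PhysicsEngine and SplatRenderer are
# hypothetical placeholders, not interfaces from RoboSimGS.
from typing import Dict, Protocol
import numpy as np

class PhysicsEngine(Protocol):
    def step(self, action: np.ndarray) -> None: ...
    def get_pose(self, body_name: str) -> np.ndarray: ...  # 4x4 transform

class SplatRenderer(Protocol):
    def set_pose(self, asset_name: str, pose: np.ndarray) -> None: ...
    def render(self, camera_name: str) -> np.ndarray: ...  # HxWx3 image

def rollout_step(sim: PhysicsEngine,
                 renderer: SplatRenderer,
                 body_to_splat: Dict[str, str],
                 action: np.ndarray,
                 camera: str = "wrist_cam") -> np.ndarray:
    """Advance physics on the mesh bodies, then render a photorealistic
    observation by posing the 3DGS assets at the simulated transforms."""
    sim.step(action)                                   # contact physics on meshes
    for body, splat in body_to_splat.items():
        renderer.set_pose(splat, sim.get_pose(body))   # sync appearance to physics
    return renderer.render(camera)                     # observation fed to the policy
```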