RoboVIP: Multi-View Video Generation with Visual Identity Prompting Augments Robot Manipulation
Boyang Wang, Haoran Zhang, Shujie Zhang, Jinkun Hao, Mingda Jia, Qi Lv, Yucheng Mao, Zhaoyang Lyu, Jia Zeng, Xudong Xu, Jiangmiao Pang
2026-01-09
Summary
This paper focuses on improving how robots learn to manipulate objects by generating additional training data for them. It tackles the challenge of obtaining enough diverse, useful visual data for robots to learn effectively.
What's the problem?
Training robots to do things like grasp objects requires a lot of visual data covering many different scenarios. Collecting this data in the real world is hard: it takes time and money, and many different environments have to be set up. Some researchers have tried using AI image generators to create more data, but these methods often don't produce realistic or consistent views of the scene, especially the kind of multi-angle, temporally continuous video that robots actually need to understand what's happening. Simply telling the AI what to create with text isn't precise enough either.
What's the solution?
The researchers developed a new way to guide the AI video generator. Instead of relying only on text descriptions, they show the model example images of the kinds of scenes it should create; this is called 'visual identity prompting'. They also built a pipeline to automatically collect a large library of these example images from existing robot datasets. This lets the model generate more realistic and useful training data for robots, with multiple viewpoints and consistent changes over time, as sketched below.
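To make the conditioning idea concrete, here is a minimal, hypothetical sketch of how exemplar "identity" images could be encoded and appended to text conditioning for a diffusion-style generator. The module names, shapes, and the tiny encoder are assumptions for illustration only, not the paper's actual implementation.

```python
# Hypothetical sketch of visual identity prompting: exemplar ("identity") images
# are encoded into tokens and appended to the text conditioning that guides a
# diffusion denoiser. All names and shapes are illustrative, not the paper's code.
import torch
import torch.nn as nn


class VisualIdentityPrompt(nn.Module):
    """Encodes exemplar images into conditioning tokens for a generator."""

    def __init__(self, embed_dim: int = 512):
        super().__init__()
        # Tiny image encoder standing in for a pretrained visual backbone.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=4, stride=4),
            nn.GELU(),
            nn.Conv2d(32, embed_dim, kernel_size=4, stride=4),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )

    def forward(self, exemplars: torch.Tensor) -> torch.Tensor:
        # exemplars: (batch, num_exemplars, 3, H, W) -> (batch, num_exemplars, embed_dim)
        b, n, c, h, w = exemplars.shape
        tokens = self.encoder(exemplars.view(b * n, c, h, w))
        return tokens.view(b, n, -1)


def build_conditioning(text_tokens: torch.Tensor, identity_tokens: torch.Tensor) -> torch.Tensor:
    """Concatenate text and visual-identity tokens along the sequence axis,
    so a denoiser can cross-attend to both sources of guidance."""
    return torch.cat([text_tokens, identity_tokens], dim=1)


if __name__ == "__main__":
    prompt_encoder = VisualIdentityPrompt(embed_dim=512)
    text_tokens = torch.randn(2, 16, 512)          # placeholder text embeddings
    exemplars = torch.randn(2, 3, 3, 128, 128)     # three exemplar images per sample
    identity_tokens = prompt_encoder(exemplars)
    cond = build_conditioning(text_tokens, identity_tokens)
    print(cond.shape)  # torch.Size([2, 19, 512])
```

The key design choice this illustrates is that the exemplar images become extra conditioning tokens alongside the text, giving the generator explicit visual evidence of the desired scene setup rather than a text description alone.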
Why it matters?
This work matters because it makes training robots to perform complex manipulation tasks easier and cheaper. With more realistic and relevant training data, robots can learn faster and perform better, both in simulated environments and when interacting with the physical world. This could make robots more helpful in homes, factories, and other settings.
Abstract
The diversity, quantity, and quality of manipulation data are critical for training effective robot policies. However, due to hardware and physical setup constraints, collecting large-scale real-world manipulation data remains difficult to scale across diverse environments. Recent work uses text-prompt conditioned image diffusion models to augment manipulation data by altering the backgrounds and tabletop objects in the visual observations. However, these approaches often overlook the practical need for multi-view and temporally coherent observations required by state-of-the-art policy models. Further, text prompts alone cannot reliably specify the scene setup. To provide the diffusion model with explicit visual guidance, we introduce visual identity prompting, which supplies exemplar images as conditioning inputs to guide the generation of the desired scene setup. To this end, we also build a scalable pipeline to curate a visual identity pool from large robotics datasets. Using our augmented manipulation data to train downstream vision-language-action and visuomotor policy models yields consistent performance gains in both simulation and real-robot settings.
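The abstract also mentions a scalable pipeline for curating a visual identity pool from large robotics datasets. The following is a rough, hypothetical sketch of what such curation could look like; the directory layout, the detect_regions() helper, and the size threshold are all assumptions for illustration and may differ from the paper's actual pipeline.

```python
# Hypothetical sketch of curating a visual identity pool from an existing
# robotics dataset: walk over recorded frames, propose crops, and save the
# usable ones as exemplar images. All paths and helpers here are assumed.
from pathlib import Path
from typing import Iterator

from PIL import Image

POOL_DIR = Path("visual_identity_pool")     # assumed output location
DATASET_DIR = Path("robot_dataset/frames")  # assumed per-episode frame folders


def detect_regions(image: Image.Image) -> Iterator[tuple[int, int, int, int]]:
    """Placeholder for an object/background detector that proposes boxes.
    A real pipeline would plug in a detection or segmentation model here."""
    w, h = image.size
    yield (0, 0, w // 2, h // 2)  # dummy box so the sketch runs end to end


def curate_pool(min_side: int = 64) -> None:
    POOL_DIR.mkdir(parents=True, exist_ok=True)
    for frame_path in sorted(DATASET_DIR.rglob("*.jpg")):
        image = Image.open(frame_path).convert("RGB")
        for i, (left, top, right, bottom) in enumerate(detect_regions(image)):
            # Skip crops that are too small to serve as useful exemplars.
            if right - left < min_side or bottom - top < min_side:
                continue
            crop = image.crop((left, top, right, bottom))
            crop.save(POOL_DIR / f"{frame_path.stem}_{i}.jpg")


if __name__ == "__main__":
    curate_pool()
```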