SRPO: Self-Referential Policy Optimization for Vision-Language-Action Models
Senyu Fei, Siyin Wang, Li Ji, Ao Li, Shiduo Zhang, Liming Liu, Jinlong Hou, Jingjing Gong, Xianzhong Zhao, Xipeng Qiu
2025-11-21
Summary
This paper introduces a new method, called Self-Referential Policy Optimization (SRPO), to improve how robots learn to perform tasks using both vision and language instructions.
What's the problem?
Current robots that understand vision and language typically learn by imitating human demonstrations, which limits them to behaviors they have already been shown. Reinforcement learning can push past that limit, but it is difficult in practice: robots fail often while learning, and a bare success-or-failure signal doesn't tell them enough to improve. It's like trying to learn to ride a bike knowing only whether you fell – to correct yourself, you need to know *how* you were falling.
What's the solution?
SRPO lets the robot learn from its *own* attempts. Instead of relying on external demonstrations or manually designed rewards, the robot treats the successful attempts within its current training batch as a benchmark. When an attempt fails, the robot compares it against those successes, using a 'world model' to measure how much progress it actually made toward completing the task. The world model encodes the environment in a compressed latent form, so these progress comparisons transfer across different situations without task-specific adjustments.
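The self-referential reward idea can be sketched in a few lines. This is an illustrative toy, not the paper's exact formula: the function name `progress_reward`, the use of cosine similarity, and the max-progress aggregation are all assumptions made here for clarity. It takes a failed rollout's world-model latents and the latents of successful rollouts from the same batch, and scores how far along the successful trajectories the failed attempt got.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two latent vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def progress_reward(failed, successes):
    """Dense reward for a failed rollout (illustrative sketch).

    `failed` is a (T, d) array of world-model latents from the failed
    attempt; `successes` is a list of (T_i, d) latent arrays from
    successful rollouts in the same training batch.  Each failed-rollout
    latent is matched to the point on a successful trajectory it most
    resembles; the reward is the furthest fraction of that reference
    trajectory reached, weighted by how closely it was matched.
    """
    best = 0.0
    for ref in successes:
        for z in failed:
            sims = [cosine(z, r) for r in ref]
            idx = int(np.argmax(sims))
            frac = idx / (len(ref) - 1) if len(ref) > 1 else 1.0
            best = max(best, frac * max(sims))
    return best
```

For example, a failed attempt whose latents only reach the midpoint of a successful trajectory would score around 0.5, while one that matches the final latent scores 1.0 – a graded signal where a binary success indicator would give 0 in both cases.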
Why it matters?
This research is important because it substantially improves a robot's ability to learn complex tasks without constant human guidance. Starting from a 48.9% success rate, the method reaches 99.2% on the LIBERO benchmark after just 200 reinforcement-learning steps, and it remains robust in more challenging and varied environments, making robots more adaptable and useful in real-world situations.
Abstract
Vision-Language-Action (VLA) models excel in robotic manipulation but are constrained by their heavy reliance on expert demonstrations, leading to demonstration bias and limiting performance. Reinforcement learning (RL) is a vital post-training strategy to overcome these limits, yet current VLA-RL methods, including group-based optimization approaches, are crippled by severe reward sparsity. Relying on binary success indicators wastes valuable information in failed trajectories, resulting in low training efficiency. To solve this, we propose Self-Referential Policy Optimization (SRPO), a novel VLA-RL framework. SRPO eliminates the need for external demonstrations or manual reward engineering by leveraging the model's own successful trajectories, generated within the current training batch, as a self-reference. This allows us to assign a progress-wise reward to failed attempts. A core innovation is the use of latent world representations to measure behavioral progress robustly. Instead of relying on raw pixels or requiring domain-specific fine-tuning, we utilize the compressed, transferable encodings from a world model's latent space. These representations naturally capture progress patterns across environments, enabling accurate, generalized trajectory comparison. Empirical evaluations on the LIBERO benchmark demonstrate SRPO's efficiency and effectiveness. Starting from a supervised baseline with 48.9% success, SRPO achieves a new state-of-the-art success rate of 99.2% in just 200 RL steps, representing a 103% relative improvement without any extra supervision. Furthermore, SRPO shows substantial robustness, achieving a 167% performance improvement on the LIBERO-Plus benchmark.
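The abstract situates SRPO among group-based optimization approaches, where each rollout's reward is compared against the other rollouts in its group rather than against an external baseline. A minimal sketch of that group-relative advantage step is below; the function name `group_advantages` and the mean/std normalization are assumptions in the style of GRPO-like methods, not code from the paper. Feeding progress-wise rewards (rather than binary 0/1 outcomes) into such a step is what lets failed trajectories carry a useful learning signal.

```python
import numpy as np

def group_advantages(rewards):
    """Group-relative advantages (illustrative sketch).

    `rewards` holds one scalar per rollout in the group, e.g. the
    progress-wise rewards of a batch.  Each rollout's advantage is its
    reward centered by the group mean and scaled by the group std, so
    above-average attempts are reinforced and below-average ones
    discouraged without any external value baseline.
    """
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + 1e-8)
```

With binary rewards, a group of all-failed rollouts yields identical rewards and therefore zero advantage everywhere; graded progress rewards break that tie, which is the reward-sparsity problem the abstract describes.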