Learning while Deploying: Fleet-Scale Reinforcement Learning for Generalist Robot Policies
Yi Wang, Xinchen Li, Pengwei Xie, Pu Yang, Buqing Nie, Yunuo Cai, Qinglin Zhang, Chendi Qu, Jeffrey Wu, Jianheng Song, Xinlin Ren, Jingshun Huang, Mingjie Pan, Siyuan Feng, Zhi Chen, Jianlan Luo
2026-05-04
Summary
This paper introduces a way to keep improving robot policies after their initial training, so robots get better at tasks while they are actually being used in the real world.
What's the problem?
Robots trained with existing methods often struggle with the unpredictability of real-world environments. Demonstration data collected ahead of time can't cover every situation a robot will encounter, so policies fail when conditions shift slightly or when tasks are long and complex. Worse, once deployed, robots get no benefit from human corrections or from the new situations they experience on the job.
What's the solution?
The researchers developed a system called Learning While Deploying (LWD) in which robots learn from their own experience, and from each other, while performing tasks. A fleet of robots attempts tasks, and their successes, failures, and human corrections are pooled to improve a single shared policy, which is then redeployed to the fleet. To keep this learning stable on sparse-reward fleet data, they combine two techniques: Distributional Implicit Value Learning (DIVL), which robustly estimates how good different actions are, and Q-learning via Adjoint Matching (QAM), which uses those estimates to extract better actions from the policy's flow-based action generator. The system works with Vision-Language-Action (VLA) policies that understand both language and vision, so one policy can handle a variety of tasks. A minimal sketch of the loop follows.
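To make the deploy-learn-redeploy cycle concrete, here is a minimal sketch in Python. Everything in it is illustrative: `Episode`, `FleetBuffer`, `run_task`, `update_value`, and `update_policy` are hypothetical stand-ins for the components the paper describes, not the authors' actual API.

```python
# Illustrative sketch of the LWD deploy -> learn -> redeploy loop.
# All names here (Episode, FleetBuffer, run_task, update_value,
# update_policy) are hypothetical stand-ins, not the authors' API.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Episode:
    observations: list
    actions: list
    reward: float           # sparse: 1.0 on task success, 0.0 on failure
    human_intervened: bool  # True if a human corrected the robot mid-task

@dataclass
class FleetBuffer:
    """Shared buffer pooling experience from every robot in the fleet."""
    episodes: List[Episode] = field(default_factory=list)

    def add(self, episode: Episode) -> None:
        self.episodes.append(episode)

def learning_while_deploying(policy,
                             fleet: list,
                             update_value: Callable,
                             update_policy: Callable,
                             num_rounds: int):
    """Run a continual post-training loop of the kind the paper describes."""
    buffer = FleetBuffer()
    for _ in range(num_rounds):
        # 1. Deploy the current generalist policy across the fleet;
        #    collect autonomous rollouts plus any human interventions.
        for robot in fleet:
            buffer.add(robot.run_task(policy))

        # 2. Fit a value function on the pooled, sparse-reward data
        #    (the paper uses DIVL for robust value estimation).
        value_fn = update_value(buffer)

        # 3. Extract an improved policy from those value estimates
        #    (the paper uses QAM for flow-based VLA action heads).
        policy = update_policy(policy, value_fn, buffer)
        # 4. The loop repeats: the improved policy is redeployed.
    return policy
```

The key design point is that a single shared policy is trained on the whole fleet's pooled experience, so a failure or human correction seen by one robot improves every robot after the next redeployment.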
Why it matters?
This research matters because it makes robots more reliable and adaptable in real-world settings. Instead of needing to be constantly reprogrammed, they can continuously learn and improve on their own, making them more useful for jobs like restocking in warehouses or helping people at home. The 95% average success rate reported on complex, minutes-long tasks suggests this approach can unlock more advanced robotic capabilities.
Abstract
Generalist robot policies increasingly benefit from large-scale pretraining, but offline data alone is insufficient for robust real-world deployment. Deployed robots encounter distribution shifts, long-tail failures, task variations, and human correction opportunities that fixed demonstration datasets cannot fully capture. We present Learning While Deploying (LWD), a fleet-scale offline-to-online reinforcement learning framework for continual post-training of generalist Vision-Language-Action (VLA) policies. Starting from a pretrained VLA policy, LWD closes the loop between deployment, shared physical experience, policy improvement, and redeployment by using autonomous rollouts and human interventions collected across a robot fleet. To stabilize learning from heterogeneous, sparse-reward fleet data, LWD combines Distributional Implicit Value Learning (DIVL) for robust value estimation with Q-learning via Adjoint Matching (QAM) for policy extraction in flow-based VLA action generators. We validate LWD on a fleet of 16 dual-arm robots across eight real-world manipulation tasks, including semantic grocery restocking and 3–5 minute long-horizon tasks. A single generalist policy improves as fleet experience accumulates, reaching an average success rate of 95%, with the largest gains on long-horizon tasks.
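The abstract names DIVL but does not spell out its objective. As a reference point for what "implicit value learning" usually means, here is a standard expectile-regression value loss in the style of implicit Q-learning (IQL). This is an assumption for illustration only: the paper's distributional extension is not shown, and this is not the paper's exact loss.

```python
# Expectile-regression value loss in the style of implicit Q-learning (IQL).
# Shown as a reference point for "implicit value learning"; the paper's
# DIVL objective is distributional and is NOT reproduced here.
import torch

def expectile_value_loss(q_values: torch.Tensor,
                         v_values: torch.Tensor,
                         tau: float = 0.9) -> torch.Tensor:
    """Asymmetric L2 loss. With tau > 0.5, V is pulled toward an upper
    expectile of Q, approximating the value of the best in-distribution
    actions without sampling any out-of-distribution actions."""
    diff = q_values - v_values
    # Weight positive errors by tau and negative errors by (1 - tau).
    weight = torch.where(diff > 0,
                         torch.full_like(diff, tau),
                         torch.full_like(diff, 1.0 - tau))
    return (weight * diff.pow(2)).mean()

# Toy example: Q-targets for logged actions vs. current V-estimates.
q = torch.tensor([0.2, 0.8, 1.0])
v = torch.tensor([0.5, 0.5, 0.5])
print(expectile_value_loss(q, v))  # scalar loss, ready for backprop
```

Avoiding queries of actions outside the dataset is the property that makes this family of losses attractive for offline-to-online training on sparse-reward fleet data, where evaluating unseen actions on real robots is expensive or unsafe.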