Steering Your Generalists: Improving Robotic Foundation Models via Value Guidance
Mitsuhiko Nakamoto, Oier Mees, Aviral Kumar, Sergey Levine
2024-10-24

Summary
This paper introduces Value-Guided Policy Steering (V-GPS), a method that improves the performance of general-purpose robotic policies by re-ranking their candidate actions at deployment time using a value function learned via offline reinforcement learning.
What's the problem?
Generalist robot policies are trained on large, diverse sets of demonstrations, but the quality of that data is mixed: human-collected demonstrations are rarely perfect, and the larger the dataset, the harder it is to curate only the best examples. Policies that imitate this data can therefore reproduce suboptimal actions when deployed, performing worse than expected.
What's the solution?
The authors propose V-GPS, which improves performance at deployment time by sampling several candidate actions from the generalist policy at each step, re-ranking them with a value function learned from prior experience via offline RL, and executing the highest-value action. Because the method only re-ranks the policy's own outputs, it works with different robotic policies without fine-tuning or even accessing their weights, improving action selection across a variety of tasks and platforms.
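The sketch below illustrates this re-ranking loop under simple assumptions; the names `policy.sample_action` and `q_function`, the candidate count, and the optional softmax re-weighting are illustrative placeholders, not the authors' released implementation.

```python
# Minimal sketch of value-guided action re-ranking at a single deployment step.
# Assumptions: `policy.sample_action(obs)` draws one action from the frozen
# generalist policy, and `q_function(obs, action)` returns a scalar value
# learned via offline RL. Both interfaces are hypothetical.
import numpy as np

def value_guided_step(policy, q_function, observation,
                      num_candidates=10, temperature=0.0):
    """Select an action by re-ranking policy samples with a learned value."""
    # 1. Sample K candidate actions from the frozen generalist policy.
    candidates = [policy.sample_action(observation) for _ in range(num_candidates)]

    # 2. Score each candidate with the offline-RL value function.
    scores = np.array([q_function(observation, a) for a in candidates])

    if temperature <= 0.0:
        # 3a. Greedy re-ranking: execute the highest-value candidate.
        return candidates[int(np.argmax(scores))]

    # 3b. Alternatively, sample a candidate with probability proportional
    #     to exp(score / temperature) for softer re-weighting.
    probs = np.exp((scores - scores.max()) / temperature)
    probs /= probs.sum()
    return candidates[int(np.random.choice(num_candidates, p=probs))]
```

Because the base policy is only queried for samples, this kind of steering treats it as a black box, which is why the same value function can be reused across policies with different architectures and training data.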
Why it matters?
This research matters because it offers a practical way to improve how robots act in real-world settings without retraining large policies, making them more reliable at completing tasks. By sharpening the decision-making of deployed robots, it can increase their usefulness in applications such as manufacturing, healthcare, and service industries.
Abstract
Large, general-purpose robotic policies trained on diverse demonstration datasets have been shown to be remarkably effective both for controlling a variety of robots in a range of different scenes, and for acquiring broad repertoires of manipulation skills. However, the data that such policies are trained on is generally of mixed quality -- not only are human-collected demonstrations unlikely to perform the task perfectly, but the larger the dataset is, the harder it is to curate only the highest quality examples. It also remains unclear how optimal data from one embodiment is for training on another embodiment. In this paper, we present a general and broadly applicable approach that enhances the performance of such generalist robot policies at deployment time by re-ranking their actions according to a value function learned via offline RL. This approach, which we call Value-Guided Policy Steering (V-GPS), is compatible with a wide range of different generalist policies, without needing to fine-tune or even access the weights of the policy. We show that the same value function can improve the performance of five different state-of-the-art policies with different architectures, even though they were trained on distinct datasets, attaining consistent performance improvement on multiple robotic platforms across a total of 12 tasks. Code and videos can be found at: https://nakamotoo.github.io/V-GPS