Compose Your Policies! Improving Diffusion-based or Flow-based Robot Policies via Test-time Distribution-level Composition
Jiahang Cao, Yize Huang, Hanzhong Guo, Rui Zhang, Mu Nan, Weijian Mai, Jiaxu Wang, Hao Cheng, Jingkai Sun, Gang Han, Wen Zhao, Qiang Zhang, Yijie Guo, Qihao Zheng, Chunfeng Song, Xiao Li, Ping Luo, Andrew F. Luo
2025-10-06
Summary
This paper introduces a new way to improve how robots learn to perform tasks, focusing on 'diffusion-based' models, which are becoming popular in robotics. The key idea is to combine the strengths of already-trained robot 'brains' without training anything new.
What's the problem?
Training robots to do things well usually requires a huge amount of data showing them how to interact with the world. Getting this data is expensive and time-consuming, which limits how quickly we can improve robot control systems. Existing diffusion models, while promising, still suffer from this data bottleneck.
What's the solution?
The researchers found that by mathematically combining the 'thinking' processes of multiple pre-trained robot policies, essentially taking a weighted average of their predicted denoising directions, they could achieve better results than any single policy alone. This method, called General Policy Composition (GPC), requires no additional training; it simply mixes the outputs of existing models while the robot is acting. It works across different types of policies and even different ways of processing visual information.
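The core operation can be illustrated with a toy sketch. The score functions, the scalar action space, the sampler, and the candidate weights below are all hypothetical stand-ins for illustration; GPC's actual test-time search would select weights by task performance, not by simple enumeration.

```python
import random

# Hypothetical stand-ins for two pre-trained diffusion policies' score
# functions: each maps a noisy (scalar) action and a timestep to a
# denoising score estimate. Real policies would be neural networks.
def score_policy_a(action, t):
    return -action / (t + 1.0)          # pulls samples toward 0.0

def score_policy_b(action, t):
    return -(action - 0.5) / (t + 1.0)  # pulls samples toward 0.5

def composed_score(action, t, w=0.5):
    """Convex combination of the two distributional scores (0 <= w <= 1)."""
    return w * score_policy_a(action, t) + (1.0 - w) * score_policy_b(action, t)

def sample(score_fn, steps=50, step_size=0.5, seed=0):
    """Toy reverse-process sampler driven by a score function."""
    rng = random.Random(seed)
    action = rng.gauss(0.0, 1.0)  # start from pure noise
    for t in reversed(range(steps)):
        action += step_size * score_fn(action, t)
    return action

# Test-time search over convex weights: enumerate candidates and keep
# whichever sample scores best under a task-specific criterion.
candidate_weights = [0.0, 0.25, 0.5, 0.75, 1.0]
composed_samples = {
    w: sample(lambda a, t, w=w: composed_score(a, t, w))
    for w in candidate_weights
}
```

Because the composition happens purely at sampling time, the pre-trained policies themselves are untouched, which is what makes the method plug-and-play across VA and VLA models.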
Why it matters?
This work is important because it offers a way to significantly boost robot performance without the costly and time-consuming process of collecting more training data. It’s like getting a smarter robot by letting different 'experts' collaborate, and it’s a simple technique that can be applied to a wide range of robotic tasks and existing robot control systems, making robots more adaptable and capable.
Abstract
Diffusion-based models for robotic control, including vision-language-action (VLA) and vision-action (VA) policies, have demonstrated significant capabilities. Yet their advancement is constrained by the high cost of acquiring large-scale interaction datasets. This work introduces an alternative paradigm for enhancing policy performance without additional model training. Perhaps surprisingly, we demonstrate that the composed policies can exceed the performance of either parent policy. Our contribution is threefold. First, we establish a theoretical foundation showing that the convex composition of distributional scores from multiple diffusion models can yield a superior one-step functional objective compared to any individual score. A Grönwall-type bound is then used to show that this single-step improvement propagates through entire generation trajectories, leading to systemic performance gains. Second, motivated by these results, we propose General Policy Composition (GPC), a training-free method that enhances performance by combining the distributional scores of multiple pre-trained policies via a convex combination and test-time search. GPC is versatile, allowing for the plug-and-play composition of heterogeneous policies, including VA and VLA models, as well as those based on diffusion or flow-matching, irrespective of their input visual modalities. Third, we provide extensive empirical validation. Experiments on Robomimic, PushT, and RoboTwin benchmarks, alongside real-world robotic evaluations, confirm that GPC consistently improves performance and adaptability across a diverse set of tasks. Further analysis of alternative composition operators and weighting strategies offers insights into the mechanisms underlying the success of GPC. These results establish GPC as a simple yet effective method for improving control performance by leveraging existing policies.
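In the abstract's terms, the central operation is a convex combination of the component policies' scores. A sketch in generic notation (the symbols $s_{\theta_k}$ and weights $w_k$ are illustrative, not necessarily the paper's exact notation):

```latex
s_{\mathrm{GPC}}(x, t) \;=\; \sum_{k=1}^{K} w_k \, s_{\theta_k}(x, t),
\qquad w_k \ge 0, \quad \sum_{k=1}^{K} w_k = 1,
```

where $s_{\theta_k}$ is the distributional score of the $k$-th pre-trained policy at noisy state $x$ and diffusion time $t$, and the weights $w_k$ are chosen by test-time search rather than learned.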