Make-An-Agent: A Generalizable Policy Network Generator with Behavior-Prompted Diffusion
Yongyuan Liang, Tingqiang Xu, Kaizhe Hu, Guangqi Jiang, Furong Huang, Huazhe Xu
2024-07-16

Summary
This paper presents Make-An-Agent, a new system that generates control policies for robots from just one example of the desired behavior, much as an image can be created from a text description.
What's the problem?
Creating effective control policies for robots usually requires extensive training with many examples, which can be time-consuming and complex. This makes it difficult to quickly adapt robots to new tasks or behaviors, especially when only limited examples are available.
What's the solution?
Make-An-Agent uses a technique called behavior-prompted diffusion to generate policy networks based on behavior embeddings, which capture important information about how an agent should act. By training on a wide range of tasks and using just a single demonstration as input, the system can create effective policies that work well in various situations. This approach allows the model to generalize and perform well even on tasks it hasn't seen before, making it versatile and efficient.
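The data flow described above can be sketched as: encode one demonstration trajectory into a behavior embedding, run a conditional diffusion (reverse denoising) process to sample a latent parameter representation, and decode that latent into policy-network weights. The sketch below is illustrative only: all component weights (`W_enc`, `W_eps`, `W_dec`), the network sizes, and the linear policy head are hypothetical stand-ins for the paper's learned encoder, denoiser, and parameter decoder, and nothing here is trained.

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT, EMB, OBS, ACT = 32, 16, 4, 2

# Untrained stand-ins for the learned components (illustrative only).
W_enc = rng.standard_normal((OBS, EMB)) * 0.3                   # behavior encoder
W_eps = rng.standard_normal((LATENT + 1 + EMB, LATENT)) * 0.1   # conditional denoiser
W_dec = rng.standard_normal((LATENT, OBS * ACT + ACT)) * 0.3    # parameter decoder

def behavior_embedding(trajectory):
    """Pool one (timesteps, OBS) demonstration into a fixed-size embedding."""
    return np.tanh(trajectory.mean(axis=0) @ W_enc)

def predict_noise(z_t, t, cond):
    """eps_theta(z_t, t, cond): predict the noise in the latent,
    conditioned on the behavior embedding."""
    return np.tanh(np.concatenate([z_t, [t], cond]) @ W_eps)

def generate_policy(trajectory, steps=50):
    """DDPM-style reverse process: start from Gaussian noise and
    iteratively denoise a latent, prompted by the behavior embedding."""
    cond = behavior_embedding(trajectory)
    betas = np.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    z = rng.standard_normal(LATENT)
    for t in reversed(range(steps)):
        eps = predict_noise(z, t / steps, cond)
        z = (z - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:  # add sampling noise on all but the final step
            z += np.sqrt(betas[t]) * rng.standard_normal(LATENT)
    # Decode the latent into flat parameters and reshape them into a
    # tiny linear policy obs -> action (the paper decodes full networks).
    flat = z @ W_dec
    W_pi = flat[: OBS * ACT].reshape(OBS, ACT)
    b_pi = flat[OBS * ACT:]
    return lambda obs: np.tanh(obs @ W_pi + b_pi)

demo = rng.standard_normal((100, OBS))   # one demonstration trajectory
policy = generate_policy(demo)           # policy synthesized from the prompt
action = policy(np.zeros(OBS))
print(action.shape)                      # (2,)
```

Because the denoiser is conditioned on the behavior embedding rather than on a task label, the same generator can in principle be prompted with demonstrations from tasks it was never trained on, which is the source of the generalization the paper reports.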
Why it matters?
This research is significant because it simplifies the process of training robots to perform different tasks. By enabling robots to learn from just one example, Make-An-Agent can help speed up the development of robotic systems in real-world applications, such as manufacturing or service industries, where quick adaptation is crucial.
Abstract
Can we generate a control policy for an agent using just one demonstration of desired behaviors as a prompt, as effortlessly as creating an image from a textual description? In this paper, we present Make-An-Agent, a novel policy parameter generator that leverages the power of conditional diffusion models for behavior-to-policy generation. Guided by behavior embeddings that encode trajectory information, our policy generator synthesizes latent parameter representations, which can then be decoded into policy networks. Trained on policy network checkpoints and their corresponding trajectories, our generation model demonstrates remarkable versatility and scalability on multiple tasks and generalizes strongly to unseen tasks, outputting well-performing policies with only few-shot demonstrations as input. We showcase its efficacy and efficiency on various domains and tasks, including varying objectives, behaviors, and even different robot manipulators. Beyond simulation, we directly deploy policies generated by Make-An-Agent onto real-world robots for locomotion tasks.