Proposer-Agent-Evaluator (PAE): Autonomous Skill Discovery For Foundation Model Internet Agents

Yifei Zhou, Qianlan Yang, Kaixiang Lin, Min Bai, Xiong Zhou, Yu-Xiong Wang, Sergey Levine, Erran Li

2024-12-18

Summary

This paper introduces the Proposer-Agent-Evaluator (PAE) system, which allows AI agents to independently discover and practice new skills without needing detailed instructions from humans.

What's the problem?

AI agents need a wide range of skills to be useful, like finding information online or performing household tasks. However, if every skill must be defined by humans with specific instructions, the agents' abilities become limited. This means they can't learn new skills on their own and may struggle to adapt to new situations.

What's the solution?

The authors propose the PAE system, which includes three main parts: a task proposer that suggests tasks for the agent to learn based on its environment, an agent policy that attempts these tasks, and an evaluator that checks how well the agent did. The evaluator's judgment serves as a reward signal, so the agent can improve through reinforcement learning (RL), a process in which it refines its behavior based on feedback about its performance. This allows the agent to discover useful skills on its own by interacting with various websites and adapting to different tasks.
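The three-part loop described above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration of the control flow only: the function names and the toy reward logic are stand-ins, not the paper's actual foundation-model components (the real proposer, policy, and evaluator are large vision-language models).

```python
# Minimal sketch of the PAE loop. All names and logic here are illustrative
# stand-ins; the paper's components are foundation models, not these stubs.

def propose_task(context):
    """Task proposer: suggests a practice task from environment context,
    e.g. a website name or a user demo."""
    return f"find the pricing page on {context}"

def run_agent(policy, task):
    """Agent policy: attempts the task, producing a trajectory of
    (thought, grounded action) steps."""
    return [("look for a 'Pricing' link", "click"),
            ("confirm prices are visible", "scroll")]

def evaluate(task, trajectory):
    """Evaluator: judges the trajectory and returns a success reward.
    In the paper this is a VLM-based success evaluator; here it is a stub."""
    return 1.0 if trajectory else 0.0

def reinforce(policy, trajectory, reward):
    """RL update: adjust the policy using the evaluator's reward
    (placeholder update for illustration)."""
    policy["total_reward"] = policy.get("total_reward", 0.0) + reward
    return policy

policy = {}
for _ in range(3):  # three autonomous practice episodes
    task = propose_task("example.com")
    trajectory = run_agent(policy, task)
    reward = evaluate(task, trajectory)
    policy = reinforce(policy, trajectory, reward)

print(policy["total_reward"])  # 3.0 after three successful episodes
```

The key design point is that no step in this loop requires a human-written task list: the proposer generates tasks, and the evaluator generates rewards, so the agent can keep practicing indefinitely.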

Why it matters?

This research is important because it enhances the capabilities of AI agents, making them more flexible and able to learn independently. By allowing agents to autonomously discover skills, we can create more advanced AI systems that can better assist people in everyday tasks, from browsing the internet to managing smart home devices.

Abstract

The vision of a broadly capable and goal-directed agent, such as an Internet-browsing agent in the digital world and a household humanoid in the physical world, has rapidly advanced, thanks to the generalization capability of foundation models. Such a generalist agent needs to have a large and diverse skill repertoire, such as finding directions between two travel locations and buying specific items from the Internet. If each skill needs to be specified manually through a fixed set of human-annotated instructions, the agent's skill repertoire will necessarily be limited due to the quantity and diversity of human-annotated instructions. In this work, we address this challenge by proposing Proposer-Agent-Evaluator, an effective learning system that enables foundation model agents to autonomously discover and practice skills in the wild. At the heart of PAE is a context-aware task proposer that autonomously proposes tasks for the agent to practice with context information of the environment such as user demos or even just the name of the website itself for Internet-browsing agents. Then, the agent policy attempts those tasks with thoughts and actual grounded operations in the real world with resulting trajectories evaluated by an autonomous VLM-based success evaluator. The success evaluation serves as the reward signal for the agent to refine its policies through RL. We validate PAE on challenging vision-based web navigation, using both real-world and self-hosted websites from WebVoyager and WebArena. To the best of our knowledge, this work represents the first effective learning system to apply autonomous task proposal with RL for agents that generalizes real-world human-annotated benchmarks with SOTA performances. Our open-source checkpoints and code can be found at https://yanqval.github.io/PAE/