Towards Internet-Scale Training For Agents
Brandon Trabucco, Gunnar Sigurdsson, Robinson Piramuthu, Ruslan Salakhutdinov
2025-02-11
Summary
This paper presents a new way to train AI agents to navigate websites without relying on human annotations. The researchers built a pipeline that uses LLMs to generate tasks, complete them, and evaluate the results across 150,000 different websites.
What's the problem?
Training AI agents to navigate websites currently requires large amounts of human-annotated data, which is time-consuming to collect and limits how much the agents can learn. This approach cannot scale to the vast number of websites on the internet.
What's the solution?
The researchers developed a three-stage pipeline built on large language models (LLMs). First, an LLM proposes tasks for 150,000 websites. Then, LLM agents attempt to complete these tasks, producing trajectories. Finally, another LLM reviews the trajectories and judges whether they succeeded. This system can detect harmful content, generate feasible tasks, and judge success almost as well as human annotators can. The researchers also found that mixing this AI-generated data with human-created data helps agents perform better and generalize to a wider variety of websites.
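The three-stage pipeline can be sketched as follows. This is a minimal illustration only: the `llm_propose_task`, `llm_agent_rollout`, and `llm_judge` functions are hypothetical stand-ins for real LLM calls (the paper's actual prompts, models, and filtering are not shown), and the stub logic exists just to make the data flow concrete.

```python
def llm_propose_task(site: str) -> str:
    # Stage 1 (stub): an LLM proposes a feasible, safe task for the site.
    # In the real pipeline this call also filters out harmful content.
    return f"Find the contact page on {site}"

def llm_agent_rollout(site: str, task: str) -> list[str]:
    # Stage 2 (stub): an LLM agent attempts the task, emitting a
    # trajectory of browser actions.
    return [f"goto({site})", "click('Contact')"]

def llm_judge(task: str, trajectory: list[str]) -> bool:
    # Stage 3 (stub): a separate LLM reviews the trajectory and
    # judges whether the task was completed successfully.
    return len(trajectory) > 0

def pipeline(sites: list[str]) -> list[dict]:
    """Run all three stages and keep only judged-successful trajectories."""
    dataset = []
    for site in sites:
        task = llm_propose_task(site)
        trajectory = llm_agent_rollout(site, task)
        if llm_judge(task, trajectory):
            dataset.append({"site": site, "task": task,
                            "trajectory": trajectory})
    return dataset
```

Because every stage is an LLM call, the pipeline needs no human in the loop and can be scaled horizontally across sites; only trajectories the judge accepts are kept as training data.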
Why it matters?
This matters because it could make AI much better at understanding and using the internet without needing constant human guidance. It could lead to more capable virtual assistants, improved web accessibility for people with disabilities, and better automated systems for tasks like customer service or information gathering across the entire internet. This approach also shows that AI can learn to navigate new websites it hasn't seen before, which is a big step towards more flexible and adaptable AI systems.
Abstract
The predominant approach for training web navigation agents gathers human demonstrations for a set of popular websites and hand-written tasks, but it is becoming clear that human data are an inefficient resource. We develop a pipeline to facilitate Internet-scale training for agents without laborious human annotations. In the first stage, an LLM generates tasks for 150k diverse websites. In the next stage, LLM agents complete tasks and produce trajectories. In the final stage, an LLM reviews the trajectories and judges their success. Language models are competitive with human annotators, detecting and filtering out harmful content with an accuracy of 97%, generating feasible tasks with an 89% rate, and judging successful trajectories with an 82.6% accuracy. Scaling the pipeline, agents based on Llama 3.1 70B solve 16.7% of tasks for 150k sites. Training on the data generated by our pipeline is competitive with training on human demonstrations. In data-limited settings derived from Mind2Web and WebLINX, we improve Step Accuracy by up to +89.5% and +122.1% respectively for agents trained on mixtures of data from our pipeline, and human data. When training agents with all available human data from these benchmarks, agents fail to generalize to diverse real sites, and adding our data improves their generalization by +149.0% for WebLINX and +156.3% for Mind2Web. Code will be available at: data-for-agents.github.io.