
Exploring Expert Failures Improves LLM Agent Tuning

Li-Cheng Lan, Andrew Bai, Minhao Cheng, Cho-Jui Hsieh, Tianyi Zhou

2025-04-18


Summary

This paper introduces Exploring Expert Failures (EEF), a method that helps large language model agents get better at complicated tasks by learning not only from an expert's successes but also from the useful parts of its failed attempts.

What's the problem?

Most training methods for AI agents only imitate trajectories where the expert model succeeds, discarding all failed attempts. This makes the agent good at easy tasks but leaves it struggling with harder ones, because it never learns from the mistakes or partial progress made during failures.

What's the solution?

The researchers created EEF, which examines failed expert attempts and identifies the individual actions or steps that were actually helpful, even when the attempt as a whole did not succeed. By adding these useful pieces from failures to the fine-tuning data, the model learns better strategies for tough tasks and can solve problems it couldn't before.
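The idea above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration, not the paper's actual implementation: the `Trajectory` class, the per-step progress signal, and the rule "keep the prefix of steps that made measurable progress" are all illustrative assumptions standing in for however EEF actually scores beneficial actions.

```python
# Hypothetical sketch of the EEF intuition: keep successful expert
# trajectories whole, and harvest only the beneficial steps from
# failed ones. All names and the progress-based selection rule are
# illustrative assumptions, not the paper's method.
from dataclasses import dataclass

@dataclass
class Trajectory:
    steps: list      # (state, action) records, simplified to strings here
    rewards: list    # assumed per-step progress signal
    success: bool    # did the episode reach the goal?

def beneficial_prefix(traj, min_gain=0.0):
    """Return the longest prefix of a failed trajectory whose steps
    each made measurable progress (reward above min_gain)."""
    prefix = []
    for step, reward in zip(traj.steps, traj.rewards):
        if reward > min_gain:
            prefix.append(step)
        else:
            break
    return prefix

def build_sft_data(trajectories, min_gain=0.0):
    """Assemble fine-tuning data: successes contribute every step,
    failures contribute only their beneficial prefix."""
    data = []
    for traj in trajectories:
        if traj.success:
            data.extend(traj.steps)
        else:
            data.extend(beneficial_prefix(traj, min_gain))
    return data
```

For example, a failed trajectory whose first step made progress but whose later steps stalled would contribute only that first step to the training set, while a successful trajectory is kept in full.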

Why it matters?

This matters because it shows that learning from mistakes can make AI agents much smarter and more capable, especially for challenging tasks. It also helps AI improve faster and become more reliable in real-world situations where things don’t always go perfectly.

Abstract

A new method, Exploring Expert Failures, enhances fine-tuning of Large Language Models by incorporating beneficial actions from failed expert trajectories, significantly improving performance in complex tasks like WebShop and SciWorld.