DynaSaur: Large Language Agents Beyond Predefined Actions

Dang Nguyen, Viet Dac Lai, Seunghyun Yoon, Ryan A. Rossi, Handong Zhao, Ruiyi Zhang, Puneet Mathur, Nedim Lipka, Yu Wang, Trung Bui, Franck Dernoncourt, Tianyi Zhou

2024-11-05

Summary

This paper introduces DynaSaur, a new framework for large language model (LLM) agents that allows them to create and perform actions dynamically rather than relying on a fixed set of predefined actions. This makes the agents more flexible and capable in real-world situations.

What's the problem?

Most existing LLM agent systems can only choose from a limited list of actions, which restricts their ability to adapt and solve problems effectively. This approach requires a lot of effort to define all possible actions, which becomes impractical in complex environments with many potential scenarios. As a result, these agents may struggle when faced with unexpected situations or tasks that don't fit neatly into their predefined options.

What's the solution?

DynaSaur addresses these challenges by enabling LLM agents to generate and execute new actions on the fly using a general-purpose programming language. Instead of being limited to predefined actions, the agents can create custom functions as needed. This allows them to adapt to new situations and reuse previously created actions for efficiency. The authors conducted extensive tests on the GAIA benchmark and found that DynaSaur significantly outperformed previous methods in terms of flexibility and problem-solving capabilities.
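To make the idea concrete, here is a minimal sketch of an agent loop that generates Python code as actions and accumulates them for reuse. All names here (`ActionAgent`, `propose_action`, `step`) are illustrative, not the paper's actual API, and the LLM call is stubbed with a canned response; a real system would prompt the model with the task and the current action library so it can reuse or compose existing actions.

```python
# Sketch of a dynamic-action agent in the spirit of DynaSaur (assumed
# interface, not the paper's implementation). Generated programs are
# executed and their functions kept in a growing action library.

class ActionAgent:
    def __init__(self):
        # Accumulated actions, kept for reuse on later steps.
        self.actions = {}

    def propose_action(self, task):
        # Stand-in for an LLM call: returns Python source defining a
        # new function for the task. A real agent would query a model.
        if task == "word_count":
            return (
                "def word_count(text):\n"
                "    return len(text.split())\n"
            )
        raise NotImplementedError(task)

    def step(self, task, *args):
        # Reuse a previously generated action if one exists.
        if task not in self.actions:
            source = self.propose_action(task)
            namespace = {}
            exec(source, namespace)  # execute the generated program
            # Register every function the program defined for reuse.
            for name, obj in namespace.items():
                if callable(obj):
                    self.actions[name] = obj
        return self.actions[task](*args)

agent = ActionAgent()
print(agent.step("word_count", "beyond predefined actions"))  # 3
print("word_count" in agent.actions)  # True: cached for later steps
```

The key design point this sketch illustrates is that the action space is open-ended: any function the generated program defines becomes a reusable action, rather than being drawn from a hand-enumerated list.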

Why it matters?

This research is important because it enhances the capabilities of AI agents, allowing them to operate more effectively in dynamic and complex environments. By enabling LLMs to create their own actions, DynaSaur opens up new possibilities for applications in robotics, virtual assistants, and any field where intelligent decision-making is required. This could lead to more advanced AI systems that can handle a wider range of tasks with less human intervention.

Abstract

Existing LLM agent systems typically select actions from a fixed and predefined set at every step. While this approach is effective in closed, narrowly-scoped environments, we argue that it presents two major challenges when deploying LLM agents in real-world scenarios: (1) selecting from a fixed set of actions significantly restricts the planning and acting capabilities of LLM agents, and (2) this approach requires substantial human effort to enumerate and implement all possible actions, which becomes impractical in complex environments with a vast number of potential actions. In this work, we propose an LLM agent framework that enables the dynamic creation and composition of actions in an online manner. In this framework, the agent interacts with the environment by generating and executing programs written in a general-purpose programming language at each step. Furthermore, generated actions are accumulated over time for future reuse. Our extensive experiments on the GAIA benchmark demonstrate that this framework offers significantly greater flexibility and outperforms previous methods. Notably, it allows an LLM agent to recover in scenarios where no relevant action exists in the predefined set or when existing actions fail due to unforeseen edge cases. At the time of writing, we hold the top position on the GAIA public leaderboard. Our code can be found at https://github.com/adobe-research/dynasaur.