DynaAct: Large Language Model Reasoning with Dynamic Action Spaces

Xueliang Zhao, Wei Wu, Jian Guan, Qintong Li, Lingpeng Kong

2025-11-12

Summary

This paper introduces a method called DynaAct that helps large language models make better step-by-step decisions in complex problem-solving by automatically building a compact set of candidate actions to consider at each step.

What's the problem?

When computers try to solve problems step-by-step, they need to decide what actions to take at each step. Traditionally, people have to manually define these actions, which doesn't work well for complicated problems because it's hard to think of everything. Alternatively, computers could consider *all* possible actions, but that takes way too much computing power and time. So, the challenge is finding a good set of actions without being limited by manual definitions or overwhelmed by too many options.

What's the solution?

DynaAct tackles this problem in two main steps. First, it uses a large language model to extract general action 'sketches' from a corpus of diverse reasoning problems, giving it a broad sense of which kinds of actions tend to be useful. Think of this as brainstorming a pool of candidate moves. Then, for the current state, it scores candidates with a submodular function that rewards both usefulness and diversity, and runs a greedy algorithm to select a small set of actions that are individually helpful and collectively non-redundant.
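The selection step can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: the `utility` and `similarity` functions here are stand-ins (the paper defines its own submodular objective), and the greedy loop simply picks, at each round, the candidate whose utility minus a redundancy penalty is largest.

```python
def greedy_select(candidates, utility, similarity, k, lam=0.5):
    """Greedily pick up to k actions, trading off utility against
    redundancy with already-selected actions (an MMR-style surrogate
    for a submodular utility-plus-diversity objective)."""
    selected = []
    remaining = list(candidates)
    while remaining and len(selected) < k:
        def gain(c):
            # Penalize similarity to the closest already-selected action.
            redundancy = max((similarity(c, s) for s in selected), default=0.0)
            return utility(c) - lam * redundancy
        best = max(remaining, key=gain)
        selected.append(best)
        remaining.remove(best)
    return selected


# Toy usage: "a1" and "a2" are near-duplicates, so after picking "a1"
# the greedy step prefers the distinct action "a3" over "a2".
util = {"a1": 0.9, "a2": 0.8, "a3": 0.5}
sim = lambda x, y: 0.9 if {x, y} == {"a1", "a2"} else 0.0
print(greedy_select(["a1", "a2", "a3"], util.get, sim, k=2))
```

Because the redundancy penalty only grows as more actions are selected, each candidate's marginal gain shrinks over rounds, which is the diminishing-returns property that makes greedy selection a good fit for submodular objectives.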

Why does it matter?

This research is important because it allows computers to solve complex problems more efficiently and effectively. By automatically creating a smart set of actions, DynaAct avoids the limitations of manually defined actions and the computational cost of considering everything. This could lead to improvements in areas like robotics, game playing, and artificial intelligence in general, allowing machines to reason and act more intelligently.

Abstract

In modern sequential decision-making systems, the construction of an optimal candidate action space is critical to efficient inference. However, existing approaches either rely on manually defined action spaces that lack scalability or utilize unstructured spaces that render exhaustive search computationally prohibitive. In this paper, we propose a novel framework named DynaAct for automatically constructing a compact action space to enhance sequential reasoning in complex problem-solving scenarios. Our method first estimates a proxy for the complete action space by extracting general sketches observed in a corpus covering diverse complex reasoning problems using large language models. We then formulate a submodular function that jointly evaluates candidate actions based on their utility to the current state and their diversity, and employ a greedy algorithm to select an optimal candidate set. Extensive experiments on six diverse standard benchmarks demonstrate that our approach significantly improves overall performance, while maintaining efficient inference without introducing substantial latency. The implementation is available at https://github.com/zhaoxlpku/DynaAct.