ReCode: Unify Plan and Action for Universal Granularity Control

Zhaoyang Yu, Jiayi Zhang, Huixue Su, Yufan Zhao, Yifan Wu, Mingyi Deng, Jinyu Xiang, Yizhang Lin, Lingxiao Tang, Yingchao Li, Yuyu Luo, Bang Liu, Chenglin Wu

2025-10-28

Summary

This paper introduces ReCode, a new way of building AI agents that lets them plan and act more like humans do, seamlessly switching between big-picture goals and small, specific steps.

What's the problem?

Current AI agents built on large language models (LLMs) struggle with flexibility in how they make decisions. They usually treat planning and actually *doing* things as separate processes, which makes them less adaptable to changing situations and limits their ability to learn to solve problems at different levels of detail. It's like building a house with completely separate teams for the blueprints and the construction – they need to work together!

What's the solution?

ReCode solves this by representing both plans and actions using code. Think of a plan as a general function that the AI then breaks down into smaller and smaller sub-functions, eventually reaching the level of individual actions it can take. This 'recursive' breakdown means there's no hard line between planning and doing, allowing the AI to adjust how detailed its thinking is as needed. This method also automatically creates a lot of training data that helps the AI learn how to make decisions at different levels of complexity.
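The recursive breakdown described above can be sketched in a few lines of Python. This is only an illustrative toy, not the paper's implementation: `llm_expand` stands in for the LLM call that writes a function body, and the task names and hard-coded decomposition table are invented here so the sketch runs standalone.

```python
# Illustrative sketch of recursive code generation: abstract tasks are
# placeholder functions that get expanded into sub-functions until every
# leaf is a primitive action the agent can execute directly.

PRIMITIVES = {"move_to", "pick_up", "put_down"}  # assumed primitive actions

def llm_expand(task: str) -> list[str]:
    """Stand-in for an LLM call that decomposes `task` into sub-tasks.
    A real agent would generate this decomposition; here it is hard-coded."""
    table = {
        "make_tea": ["boil_water", "steep_tea"],
        "boil_water": ["move_to", "pick_up", "put_down"],
        "steep_tea": ["move_to", "pick_up"],
    }
    return table.get(task, [])

def execute(task: str, trace: list[str]) -> None:
    """Recursively decompose `task`; primitives are acted on immediately."""
    if task in PRIMITIVES:
        trace.append(task)            # fine-grained: execute the action
        return
    for sub in llm_expand(task):      # coarse-grained: expand the plan
        execute(sub, trace)

trace: list[str] = []
execute("make_tea", trace)
print(trace)  # → ['move_to', 'pick_up', 'put_down', 'move_to', 'pick_up']
```

Because the same `execute` loop handles both the top-level plan and each individual action, there is no separate "planner" and "controller" – the depth of recursion *is* the decision granularity.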

Why it matters?

This is important because it makes AI agents much more capable and efficient. By unifying planning and action, ReCode allows agents to perform better and learn faster, bringing us closer to AI that can handle real-world tasks with the same flexibility and common sense as humans.

Abstract

Real-world tasks require decisions at varying granularities, and humans excel at this by leveraging a unified cognitive representation where planning is fundamentally understood as a high-level form of action. However, current Large Language Model (LLM)-based agents lack this crucial capability to operate fluidly across decision granularities. This limitation stems from existing paradigms that enforce a rigid separation between high-level planning and low-level action, which impairs dynamic adaptability and limits generalization. We propose ReCode (Recursive Code Generation), a novel paradigm that addresses this limitation by unifying planning and action within a single code representation. In this representation, ReCode treats high-level plans as abstract placeholder functions, which the agent then recursively decomposes into finer-grained sub-functions until reaching primitive actions. This recursive approach dissolves the rigid boundary between plan and action, enabling the agent to dynamically control its decision granularity. Furthermore, the recursive structure inherently generates rich, multi-granularity training data, enabling models to learn hierarchical decision-making processes. Extensive experiments show ReCode significantly surpasses advanced baselines in inference performance and demonstrates exceptional data efficiency in training, validating our core insight that unifying planning and action through recursive code generation is a powerful and effective approach to achieving universal granularity control. The code is available at https://github.com/FoundationAgents/ReCode.
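The abstract's claim that the recursive structure "inherently generates rich, multi-granularity training data" can be made concrete with a small sketch. The trace format and task names below are assumptions for illustration, not the paper's actual data pipeline: the point is simply that every internal node of one decomposition tree yields a supervised example, from coarse planning steps down to fine-grained action choices.

```python
# Sketch: turning one recursive decomposition trace into training pairs.
# Each (task -> sub-tasks) expansion becomes a (prompt, completion)
# example, so a single rollout supervises every granularity level.

decomposition_trace = {
    "make_tea": ["boil_water", "steep_tea"],           # coarse: plan level
    "boil_water": ["move_to", "pick_up", "put_down"],  # fine: action level
    "steep_tea": ["move_to", "pick_up"],
}

training_pairs = [
    {"prompt": f"def {task}(): ...", "completion": subs}
    for task, subs in decomposition_trace.items()
]

print(len(training_pairs))  # → 3, one example per decomposition step
```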