The Era of Agentic Organization: Learning to Organize with Language Models

Zewen Chi, Li Dong, Qingxiu Dong, Yaru Hao, Xun Wu, Shaohan Huang, Furu Wei

2025-10-31

Summary

This paper introduces a new way for AI systems, specifically those using large language models, to tackle complicated problems by having different parts of the AI 'think' about sub-problems at the same time and then combine their results.

What's the problem?

Current AI systems often struggle with complex tasks because they try to solve everything sequentially, one step after another. This can be slow and sometimes leads to errors. While running parts of a problem in parallel can help with speed, it doesn't necessarily make the AI *better* at reasoning or allow it to learn how to think more efficiently.

What's the solution?

The researchers developed a method called 'Asynchronous Thinking' or AsyncThink. Imagine a project manager (the 'organizer') breaking down a big task into smaller pieces and assigning them to different team members (the 'workers'). These workers can work on their parts independently and at their own pace. The organizer then collects the results, puts them together, and creates a final answer. Crucially, the system learns *how* to best break down and assign these tasks using reinforcement learning, making it smarter over time.
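The organizer/worker pattern described above can be sketched in a few lines. This is a toy illustration only, with made-up stand-in functions (`decompose`, `worker`, `merge`) in place of actual language-model calls, and no reinforcement learning; it just shows the fork-and-join shape of the protocol, where sub-queries run concurrently and the organizer merges the partial results.

```python
from concurrent.futures import ThreadPoolExecutor

def organizer(query, decompose, worker, merge):
    # Fork: the organizer splits the query into sub-queries,
    # dispatches them to workers that run concurrently,
    # then joins the intermediate results into one answer.
    sub_queries = decompose(query)
    with ThreadPoolExecutor() as pool:
        partials = list(pool.map(worker, sub_queries))
    return merge(query, partials)

# Toy stand-ins: split a list of numbers in half and sum each part.
# In AsyncThink these would be language-model calls instead.
def decompose(nums):
    mid = len(nums) // 2
    return [nums[:mid], nums[mid:]]

def worker(sub):
    return sum(sub)

def merge(query, partials):
    return sum(partials)

print(organizer([1, 2, 3, 4, 5, 6], decompose, worker, merge))  # → 21
```

The key point the sketch conveys is that the workers are independent, so their "thinking" overlaps in time; what the paper adds beyond this fixed pattern is learning *how* to decompose and assign via reinforcement learning.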

Why does it matter?

This research is important because it moves AI closer to being able to handle truly complex problems that require more than just one line of thought. By allowing AI to think in a more organized and parallel way, and by letting it learn how to do so, it can become faster, more accurate, and better at adapting to new challenges without needing to be specifically retrained for each one.

Abstract

We envision a new era of AI, termed agentic organization, where agents solve complex problems by working collaboratively and concurrently, enabling outcomes beyond individual intelligence. To realize this vision, we introduce asynchronous thinking (AsyncThink) as a new paradigm of reasoning with large language models, which organizes the internal thinking process into concurrently executable structures. Specifically, we propose a thinking protocol where an organizer dynamically assigns sub-queries to workers, merges intermediate knowledge, and produces coherent solutions. More importantly, the thinking structure in this protocol can be further optimized through reinforcement learning. Experiments demonstrate that AsyncThink achieves 28% lower inference latency compared to parallel thinking while improving accuracy on mathematical reasoning. Moreover, AsyncThink generalizes its learned asynchronous thinking capabilities, effectively tackling unseen tasks without additional training.