Thought Communication in Multiagent Collaboration
Yujia Zheng, Zhuokai Zhao, Zijian Li, Yaqi Xie, Mingze Gao, Lizhu Zhang, Kun Zhang
2025-10-24
Summary
This paper introduces a new way for AI agents to communicate that goes beyond natural language. It proposes 'thought communication,' in which agents directly share the underlying ideas driving their behavior, rather than only the words they use to express those ideas.
What's the problem?
Currently, AI agents that work together rely mostly on natural language to communicate. But language is often lossy, ambiguous, and indirect, so messages are easily misinterpreted. This limits how well agents can collaborate on complex goals: important information gets lost in translation or is understood differently by each agent. Essentially, relying solely on what's *said* isn't enough for truly effective teamwork.
What's the solution?
The researchers developed a system that tries to uncover the 'hidden thoughts' driving each agent's behavior. They model these thoughts as latent variables that generate each agent's observable state. They then prove that, even without any auxiliary information, it is possible to identify both the thoughts agents share and the thoughts unique to each agent. Guided by this theory, they built a framework that extracts these latent thoughts from agents before communication and routes the relevant ones to each agent, creating a more direct form of communication (see the sketches below).
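To make this concrete, one way to write down the generative model informally described above is sketched here. The notation (latent thoughts z, agent states x_i, mixing functions f_i, index sets S_i) is my own shorthand for illustration, not the paper's.

```latex
% A schematic latent variable model (illustrative notation, not the paper's).
% Each agent i's observable state x_i is produced by an unknown function f_i
% of the subset of latent thoughts indexed by S_i.
\begin{align*}
  \mathbf{z} &= (z_1, \dots, z_n) \sim p(\mathbf{z})
      && \text{latent thoughts} \\
  \mathbf{x}_i &= f_i(\mathbf{z}_{S_i}), \quad S_i \subseteq \{1, \dots, n\}
      && \text{observable state of agent } i \\
  \text{shared}(i, j) &= \mathbf{z}_{S_i \cap S_j}, \qquad
  \text{private}(i \mid j) = \mathbf{z}_{S_i \setminus S_j}
      && \text{for any pair of agents } (i, j)
\end{align*}
```

Read this way, the identifiability results say, roughly, that the shared and private components, along with the sharing structure given by the sets S_i, can be recovered from the observed states alone, without auxiliary information; as is standard in this literature, such guarantees typically hold only up to benign indeterminacies (e.g., permutation and element-wise transformation of the latents), and the paper's precise conditions are stated in its theory section.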
Why it matters?
This research is important because it suggests a way to build AI systems that can collaborate much more effectively. By allowing agents to share underlying intentions and ideas, rather than just words, we can unlock a new level of collective intelligence. This could be useful for solving problems that are too complex for any single agent or even a group of agents communicating only through language, and it opens the door to using this approach with different types of data, not just text.
Abstract
Natural language has long enabled human cooperation, but its lossy, ambiguous, and indirect nature limits the potential of collective intelligence. While machines are not subject to these constraints, most LLM-based multi-agent systems still rely solely on natural language, exchanging tokens or their embeddings. To go beyond language, we introduce a new paradigm, thought communication, which enables agents to interact directly mind-to-mind, akin to telepathy. To uncover these latent thoughts in a principled way, we formalize the process as a general latent variable model, where agent states are generated by an unknown function of underlying thoughts. We prove that, in a nonparametric setting without auxiliary information, both shared and private latent thoughts between any pair of agents can be identified. Moreover, the global structure of thought sharing, including which agents share which thoughts and how these relationships are structured, can also be recovered with theoretical guarantees. Guided by the established theory, we develop a framework that extracts latent thoughts from all agents prior to communication and assigns each agent the relevant thoughts, along with their sharing patterns. This paradigm naturally extends beyond LLMs to all modalities, as most observational data arise from hidden generative processes. Experiments on both synthetic and real-world benchmarks validate the theory and demonstrate the collaborative advantages of thought communication. We hope this work illuminates the potential of leveraging the hidden world, as many challenges remain unsolvable through surface-level observation alone, regardless of compute or data scale.
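To make the framework described in the abstract more concrete, below is a minimal, hypothetical sketch of one round of thought communication: estimate each agent's latent thoughts, infer which dimensions are shared between which agents, and route the shared thoughts. Every name here (encode_thoughts, infer_sharing_structure, route_thoughts) and every modeling choice (taking state coordinates as thought estimates, using correlation as a sharing test) is an illustrative assumption, not the paper's implementation.

```python
# Minimal, hypothetical sketch of one thought-communication round.
# All function names and modeling choices are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def encode_thoughts(agent_state: np.ndarray, n_thoughts: int) -> np.ndarray:
    """Stand-in for a learned thought extractor: here we simply take the first
    n_thoughts coordinates of the observed state as 'estimated thoughts'."""
    return agent_state[:, :n_thoughts]

def infer_sharing_structure(thoughts: list[np.ndarray], threshold: float = 0.8) -> np.ndarray:
    """Mark thought dimension t as shared between agents i and j when their
    two estimates are strongly correlated (a crude proxy for 'same thought')."""
    n_agents, k = len(thoughts), thoughts[0].shape[1]
    shared = np.zeros((n_agents, n_agents, k), dtype=bool)
    for i in range(n_agents):
        for j in range(n_agents):
            if i == j:
                continue
            for t in range(k):
                c = np.corrcoef(thoughts[i][:, t], thoughts[j][:, t])[0, 1]
                shared[i, j, t] = abs(c) > threshold
    return shared

def route_thoughts(thoughts, shared):
    """Give each agent the thought dimensions that other agents share with it,
    tagged by the sending agent's index."""
    messages = []
    for i in range(len(thoughts)):
        incoming = []
        for j in range(len(thoughts)):
            if j != i:
                idx = np.flatnonzero(shared[j, i])
                if idx.size:
                    incoming.append((j, thoughts[j][:, idx]))
        messages.append(incoming)
    return messages

# Toy data: agents 0 and 1 are driven by a common latent thought in their
# first state coordinate; everything else is private noise.
common = rng.standard_normal(64)
states = [rng.standard_normal((64, 8)) for _ in range(3)]
states[0][:, 0] = common + 0.1 * rng.standard_normal(64)
states[1][:, 0] = common + 0.1 * rng.standard_normal(64)

thoughts = [encode_thoughts(s, n_thoughts=4) for s in states]
messages = route_thoughts(thoughts, infer_sharing_structure(thoughts))
for i, msg in enumerate(messages):
    print(f"agent {i} receives shared thoughts from agents {[j for j, _ in msg]}")
```

In this toy run, agents 0 and 1 end up exchanging only the single thought dimension they actually share, while agent 2 receives nothing; in the paper, the extractor and the sharing structure come from the identifiability-guided framework rather than the simple stand-ins used here.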