
Scaling External Knowledge Input Beyond Context Windows of LLMs via Multi-Agent Collaboration

Zijun Liu, Zhennan Wan, Peng Li, Ming Yan, Ji Zhang, Fei Huang, Yang Liu

2025-05-28


Summary

This paper introduces ExtAgents, a framework that lets large language models draw on far more outside information when answering questions, without making their context window any bigger.

What's the problem?

The problem is that a language model can only read a fixed amount of text at one time, known as its context window. This limit makes it hard for the model to use large collections of extra facts or documents when answering complex questions or solving tough problems.

What's the solution?

To fix this, the researchers created a system in which multiple AI agents work together: the external knowledge is spread across the agents, which process their portions and share what they find with each other. This way, the main language model can draw on far more knowledge during its reasoning process, even though its own context window has not gotten any bigger.
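To make the general idea concrete, here is a minimal sketch of this kind of multi-agent setup. This is a hypothetical illustration, not the actual ExtAgents implementation: the worker and central "agents" are stand-in functions where real LLM calls would go, and the character-based context limit stands in for a token budget.

```python
# Hypothetical sketch of the multi-agent idea (NOT the paper's actual code):
# each worker agent reads one chunk of external knowledge that would not fit
# in a single context window, extracts what is relevant to the question, and
# a central agent reasons over the compact extracts instead of raw documents.

CONTEXT_LIMIT = 2000  # max characters one agent reads at once (stand-in for tokens)

def chunk_knowledge(documents, limit=CONTEXT_LIMIT):
    """Split the external knowledge into pieces that each fit one context window."""
    text = " ".join(documents)
    return [text[i:i + limit] for i in range(0, len(text), limit)]

def worker_agent(chunk, question):
    """Stand-in for an LLM call: keep only sentences that mention question terms."""
    terms = set(question.lower().split())
    relevant = [s for s in chunk.split(". ") if terms & set(s.lower().split())]
    return ". ".join(relevant)

def central_agent(extracts, question):
    """Stand-in for the main model: answer from the pooled, compact extracts."""
    evidence = " ".join(e for e in extracts if e)
    return f"Q: {question} | evidence: {evidence[:200]}"

def answer(documents, question):
    # The total knowledge can be far larger than CONTEXT_LIMIT; no single
    # agent ever sees more than one window's worth of text.
    extracts = [worker_agent(c, question) for c in chunk_knowledge(documents)]
    return central_agent(extracts, question)
```

The key point the sketch captures is that the main model's input stays small: it sees only the distilled extracts, while the workers collectively cover an arbitrarily large body of external knowledge.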

Why it matters?

This matters because it lets AI systems use more information and give better answers, making them more useful for research, learning, and solving real-world problems, all without requiring a larger context window or a bigger model.

Abstract

The ExtAgents multi-agent framework enhances the scalability of inference-time knowledge integration in large language models, improving performance without increasing the context window.