LLM$\times$MapReduce: Simplified Long-Sequence Processing using Large Language Models

Zihan Zhou, Chong Li, Xinyi Chen, Shuo Wang, Yu Chao, Zhili Li, Haoyu Wang, Rongqiao An, Qi Shi, Zhixing Tan, Xu Han, Xiaodong Shi, Zhiyuan Liu, Maosong Sun

2024-10-16

Summary

This paper introduces LLM×MapReduce, a new framework that helps large language models (LLMs) process long texts more effectively by breaking them into smaller parts.

What's the problem?

As LLMs are applied to tasks involving very long texts, they often struggle because they can only attend to a limited amount of context at once. When these texts are split into chunks, important long-range details can get lost, making it hard for the model to give complete or accurate answers. This loss of information can happen in two ways: sometimes a correct answer depends on information spread across multiple chunks (inter-chunk dependency), and other times different chunks suggest contradictory answers (inter-chunk conflict).

What's the solution?

The authors propose LLM×MapReduce, a training-free framework that divides a long document into smaller chunks for the model to read individually, then aggregates the intermediate answers into a final response. To handle the two kinds of lost information, they introduce a structured information protocol that preserves and shares information across chunks, and an in-context confidence calibration mechanism that weighs the answers from different chunks when they conflict. Together, these let the model maintain coherence and a more complete understanding when dealing with long texts.
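At a high level, the divide-and-conquer pipeline can be sketched in a few lines. The sketch below is illustrative only: `ask_llm` is a hypothetical stand-in for a real model call, and the structured per-chunk results and confidence-weighted reduce step are simplified analogues of the paper's information protocol and in-context confidence calibration, not its exact prompts or logic.

```python
def split_into_chunks(document: str, chunk_size: int) -> list[str]:
    """Split a long document into fixed-size chunks (the 'map' inputs)."""
    return [document[i:i + chunk_size] for i in range(0, len(document), chunk_size)]


def map_stage(chunks, question, ask_llm):
    """Ask the model about each chunk independently, keeping a structured
    result (answer plus self-reported confidence) rather than a bare string,
    so the reduce stage can weigh conflicting evidence."""
    return [ask_llm(chunk, question) for chunk in chunks]


def reduce_stage(results):
    """Aggregate intermediate answers into one final answer. Conflicts
    between chunks are resolved by keeping the highest-confidence answer,
    a simplified stand-in for in-context confidence calibration."""
    informative = [r for r in results if r["answer"] is not None]
    if not informative:
        return None
    return max(informative, key=lambda r: r["confidence"])["answer"]


def answer_long_document(document, question, ask_llm, chunk_size=2000):
    """End-to-end pipeline: split, map over chunks, then reduce."""
    chunks = split_into_chunks(document, chunk_size)
    return reduce_stage(map_stage(chunks, question, ask_llm))
```

In a real system, `ask_llm` would prompt an LLM and parse its structured output; here it is just a placeholder interface that returns a dict with `"answer"` and `"confidence"` keys.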

Why it matters?

This research is important because it improves how AI models can analyze and understand lengthy documents, which is useful in many fields like education, law, and research. By enabling LLMs to handle longer texts more effectively, this framework can enhance their usefulness in real-world applications where detailed understanding is crucial.

Abstract

Enlarging the context window of large language models (LLMs) has become a crucial research area, particularly for applications involving extremely long texts. In this work, we propose a novel training-free framework for processing long texts, utilizing a divide-and-conquer strategy to achieve comprehensive document understanding. The proposed LLM$\times$MapReduce framework splits the entire document into several chunks for LLMs to read and then aggregates the intermediate answers to produce the final output. The main challenge for divide-and-conquer long text processing frameworks lies in the risk of losing essential long-range information when splitting the document, which can lead the model to produce incomplete or incorrect answers based on the segmented texts. Disrupted long-range information can be classified into two categories: inter-chunk dependency and inter-chunk conflict. We design a structured information protocol to better cope with inter-chunk dependency and an in-context confidence calibration mechanism to resolve inter-chunk conflicts. Experimental results demonstrate that LLM$\times$MapReduce can outperform representative open-source and commercial long-context LLMs, and is applicable to several different models.