Multi-LLM Text Summarization
Jiangnan Fang, Cheng-Tse Liu, Jieun Kim, Yash Bhedaru, Ethan Liu, Nikhil Singh, Nedim Lipka, Puneet Mathur, Nesreen K. Ahmed, Franck Dernoncourt, Ryan A. Rossi, Hanieh Deilamsalehy
2024-12-23

Summary
This paper introduces a framework for summarizing text with multiple large language models (LLMs). It explores two strategies, centralized and decentralized, for improving the quality of the generated summaries.
What's the problem?
Traditional text summarization methods often rely on a single LLM to create summaries, which can limit the diversity and quality of the output. This can result in less informative or overly simplistic summaries that miss important details from the original text.
What's the solution?
The authors propose a multi-LLM summarization framework with two main steps in each round: generating summaries and evaluating them. In the centralized approach, multiple LLMs generate candidate summaries, but a single LLM evaluates them and picks the best one. In the decentralized approach, all LLMs take part in both generating and evaluating the summaries. The study shows that using multiple LLMs substantially improves summary quality, outperforming single-LLM baselines by up to 3x.
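To make the centralized variant concrete, here is a minimal Python sketch: k generator LLMs each produce a candidate summary, and one evaluator LLM picks among them. The `call_llm(model, prompt)` helper, the prompts, and the fallback behavior are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal sketch of the centralized strategy: k LLMs generate candidate
# summaries, and a single evaluator LLM selects the best one.
# `call_llm(model, prompt)` is a hypothetical stand-in for whatever API
# each model is served through; it is not part of the paper.
from typing import Callable, List

def centralized_summarize(
    text: str,
    generator_models: List[str],
    evaluator_model: str,
    call_llm: Callable[[str, str], str],
) -> str:
    # Generation step: each of the k LLMs produces its own summary.
    candidates = [
        call_llm(model, f"Summarize the following text:\n\n{text}")
        for model in generator_models
    ]

    # Evaluation step: one central LLM compares the candidates and
    # replies with the number of the summary it judges best.
    numbered = "\n\n".join(
        f"Summary {i + 1}:\n{s}" for i, s in enumerate(candidates)
    )
    verdict = call_llm(
        evaluator_model,
        "Choose the best summary of the source text. "
        f"Reply with only its number.\n\n{numbered}",
    )
    digits = "".join(ch for ch in verdict if ch.isdigit())
    best = int(digits) - 1 if digits else 0
    if not (0 <= best < len(candidates)):
        best = 0  # fall back to the first candidate on an unusable reply
    return candidates[best]
```

In this sketch the evaluator sees all candidates at once and returns a single choice; the paper's actual prompting and selection details may differ.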
Why it matters?
This research is important because it demonstrates that using multiple AI models can lead to better text summarization. This has practical applications in many areas, such as news aggregation, academic research, and content creation, where high-quality summaries are essential for quickly understanding large amounts of information.
Abstract
In this work, we propose a Multi-LLM summarization framework, and investigate two different multi-LLM strategies including centralized and decentralized. Our multi-LLM summarization framework has two fundamentally important steps at each round of conversation: generation and evaluation. These steps are different depending on whether our multi-LLM decentralized summarization is used or centralized. In both our multi-LLM decentralized and centralized strategies, we have k different LLMs that generate diverse summaries of the text. However, during evaluation, our multi-LLM centralized summarization approach leverages a single LLM to evaluate the summaries and select the best one whereas k LLMs are used for decentralized multi-LLM summarization. Overall, we find that our multi-LLM summarization approaches significantly outperform the baselines that leverage only a single LLM by up to 3x. These results indicate the effectiveness of multi-LLM approaches for summarization.
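The decentralized strategy described above differs mainly in the evaluation step, where all k LLMs judge the candidates instead of deferring to a single evaluator. The sketch below shows one plausible way to aggregate their preferences by simple majority vote; the `call_llm` helper and the tie-handling are assumptions for illustration, not the paper's exact protocol.

```python
# Minimal sketch of a decentralized evaluation round: every one of the
# k LLMs votes for the candidate summary it prefers, and the summary
# with the most votes wins.
from collections import Counter
from typing import Callable, List

def decentralized_select(
    candidates: List[str],
    models: List[str],
    call_llm: Callable[[str, str], str],
) -> str:
    numbered = "\n\n".join(
        f"Summary {i + 1}:\n{s}" for i, s in enumerate(candidates)
    )
    votes = []
    for model in models:
        reply = call_llm(
            model,
            "Pick the best summary below. Reply with only its number.\n\n"
            + numbered,
        )
        digits = "".join(ch for ch in reply if ch.isdigit())
        if digits and 0 <= int(digits) - 1 < len(candidates):
            votes.append(int(digits) - 1)
    if not votes:
        return candidates[0]  # no usable votes: keep the first candidate
    winner, _ = Counter(votes).most_common(1)[0]
    return candidates[winner]
```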