LongWriter: Unleashing 10,000+ Word Generation from Long Context LLMs
Yushi Bai, Jiajie Zhang, Xin Lv, Linzhi Zheng, Siqi Zhu, Lei Hou, Yuxiao Dong, Jie Tang, Juanzi Li
2024-08-14

Summary
This paper presents LongWriter, which enables large language models (LLMs) to generate very long texts (over 10,000 words) by training them on data built from writing tasks that have been broken down into smaller parts.
What's the problem?
Even though current long-context LLMs can process inputs of up to 100,000 tokens, they struggle to produce long outputs and are typically limited to around 2,000 words. The authors trace this limitation mainly to the scarcity of training examples that include very long outputs.
What's the solution?
To solve this issue, the authors developed AgentWrite, an agent-based pipeline that breaks a long writing task into smaller subtasks, letting off-the-shelf LLMs produce coherent outputs of more than 20,000 words. Using AgentWrite, they built LongWriter-6k, a dataset of 6,000 supervised fine-tuning examples with output lengths ranging from 2,000 to 32,000 words. Training on this dataset scaled the output length of existing models to over 10,000 words while keeping quality high. They also introduced LongBench-Write, a benchmark for testing how well models produce ultra-long texts.
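To make the plan-then-write idea concrete, here is a minimal, hypothetical sketch of decomposing one long writing task into sequential subtasks. It is not the authors' actual AgentWrite implementation: the `generate` callable, the prompt wording, and the 800-words-per-section default are all assumptions chosen for illustration.

```python
# Minimal sketch of a plan-then-write decomposition in the spirit of AgentWrite.
# The `generate` callable stands in for any LLM backend (a placeholder, not the
# paper's implementation); prompts and section sizes are illustrative only.

from typing import Callable, List

def plan_sections(instruction: str, target_words: int, words_per_section: int = 800) -> List[str]:
    """Split one long writing task into ordered subtask descriptions."""
    n_sections = max(1, target_words // words_per_section)
    return [
        f"Section {i + 1} of {n_sections} for the task: {instruction} "
        f"(aim for roughly {words_per_section} words)"
        for i in range(n_sections)
    ]

def write_long(instruction: str, target_words: int,
               generate: Callable[[str], str]) -> str:
    """Generate each planned section in order, conditioning on recent sections."""
    sections = plan_sections(instruction, target_words)
    draft: List[str] = []
    for subtask in sections:
        context = "\n\n".join(draft[-2:])  # keep the prompt short: only the two most recent sections
        prompt = f"{subtask}\n\nPreviously written text:\n{context}\n\nContinue the piece:"
        draft.append(generate(prompt))
    return "\n\n".join(draft)

if __name__ == "__main__":
    # Dummy backend so the sketch runs without any model.
    dummy = lambda prompt: f"[{len(prompt)} chars of prompt -> one section of text]"
    print(write_long("Write a travel guide to Kyoto", target_words=10_000, generate=dummy))
```

In the paper, this decomposition is used to construct long-output training data rather than at inference time; the fine-tuned LongWriter models then generate long texts directly.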
Why it matters?
This research is important because it enhances the capabilities of AI in generating long-form content, which can be useful for writing books, articles, and other extensive documents. By enabling models to create longer and more coherent texts, it opens up new possibilities for content creation in various fields.
Abstract
Current long-context large language models (LLMs) can process inputs of up to 100,000 tokens, yet struggle to generate outputs exceeding even a modest length of 2,000 words. Through controlled experiments, we find that the model's effective generation length is inherently bounded by the samples it has seen during supervised fine-tuning (SFT). In other words, the output limitation stems from the scarcity of long-output examples in existing SFT datasets. To address this, we introduce AgentWrite, an agent-based pipeline that decomposes ultra-long generation tasks into subtasks, enabling off-the-shelf LLMs to generate coherent outputs exceeding 20,000 words. Leveraging AgentWrite, we construct LongWriter-6k, a dataset containing 6,000 SFT samples with output lengths ranging from 2k to 32k words. By incorporating this dataset into model training, we successfully scale the output length of existing models to over 10,000 words while maintaining output quality. We also develop LongBench-Write, a comprehensive benchmark for evaluating ultra-long generation capabilities. Our 9B-parameter model, further improved through DPO, achieves state-of-the-art performance on this benchmark, surpassing even much larger proprietary models. In general, our work demonstrates that existing long-context LLMs already possess the potential for a larger output window; all you need is data with extended outputs during model alignment to unlock this capability. Our code & models are at: https://github.com/THUDM/LongWriter.
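As a rough illustration of how a LongBench-Write-style benchmark might reward outputs that hit a required length, here is a hypothetical scoring function. The `length_score` name and the linear penalty are assumptions for illustration and do not reproduce the paper's actual metric.

```python
# Hypothetical length-adherence score for evaluating ultra-long outputs.
# This is an illustrative sketch, not the scoring rule used in LongBench-Write.

def length_score(required_words: int, produced_words: int) -> float:
    """Return a score in [0, 100]; 100 means the output matches the required length."""
    if produced_words <= 0:
        return 0.0
    ratio = produced_words / required_words
    deviation = abs(ratio - 1.0)          # relative distance from the target length
    return max(0.0, 100.0 * (1.0 - deviation))

if __name__ == "__main__":
    print(length_score(10_000, 9_500))   # close to target -> high score (95.0)
    print(length_score(10_000, 2_000))   # far too short -> low score (20.0)
```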