Cost-Optimal Grouped-Query Attention for Long-Context LLMs

Yingfa Chen, Yutong Wu, Xu Han, Zhiyuan Liu, Maosong Sun

2025-03-13

Summary

This paper explores how to build AI language models that handle long texts efficiently by choosing the right number of 'attention heads' (the components that let the model focus on different parts of the text), saving computing power without losing capability.

What's the problem?

Making AI models that understand long documents or conversations is expensive: as the text gets longer, the model must keep a stored record of every past word for each of its key-value attention heads, so models with many heads end up consuming huge amounts of memory and compute.
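To make the memory problem concrete, here is a small back-of-the-envelope calculator for the size of that stored record (the KV cache). The model configuration below is purely illustrative (a hypothetical 7B-scale model at a 128K-token context), not taken from the paper; it just shows how cutting key-value heads shrinks memory proportionally.

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, bytes_per_elem=2):
    """Bytes needed to cache keys and values for one sequence.

    Each layer stores a K tensor and a V tensor of shape
    (seq_len, n_kv_heads * head_dim); bytes_per_elem=2 assumes fp16/bf16.
    """
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem

# Hypothetical configuration: 32 layers, head_dim 128, 128K-token context.
full = kv_cache_bytes(32, 32, 128, 128_000)  # 32 KV heads (standard multi-head)
gqa = kv_cache_bytes(32, 8, 128, 128_000)    # 8 KV heads (grouped-query)
print(full / 2**30, gqa / 2**30)  # → 62.5 15.625 (GiB)
```

With these (made-up) numbers, going from 32 to 8 key-value heads cuts the per-sequence cache from 62.5 GiB to about 15.6 GiB, which is why the head configuration matters so much at long context lengths.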

What's the solution?

The researchers systematically compared models with different parameter sizes, context lengths, and attention head configurations, and extended existing scaling laws to account for these factors. They found that, when processing sufficiently long texts, a larger model with fewer attention heads can reach a lower loss while using less compute and memory.

Why it matters?

This helps companies and researchers create AI tools that can handle books, legal documents, or long chats without needing supercomputers, making them cheaper and greener to run.

Abstract

Building effective and efficient Transformer-based large language models (LLMs) has recently become a research focus, requiring maximizing model language capabilities and minimizing training and deployment costs. Existing efforts have primarily described complex relationships among model performance, parameter size, and data size, as well as searched for the optimal compute allocation to train LLMs. However, they overlook the impacts of context length and attention head configuration (the number of query and key-value heads in grouped-query attention) on training and inference. In this paper, we systematically compare models with different parameter sizes, context lengths, and attention head configurations in terms of model performance, computational cost, and memory cost. Then, we extend the existing scaling methods, which are based solely on parameter size and training compute, to guide the construction of cost-optimal LLMs during both training and inference. Our quantitative scaling studies show that, when processing sufficiently long sequences, a larger model with fewer attention heads can achieve a lower loss while incurring lower computational and memory costs. Our findings provide valuable insights for developing practical LLMs, especially in long-context processing scenarios. We will publicly release our code and data.
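The abstract's central knob is the grouped-query attention head configuration: several query heads share one key-value head, so fewer KV heads means a smaller cache at some cost in expressiveness. The sketch below is a minimal, unoptimized numpy illustration of that sharing pattern (single sequence, no causal masking, no projections); the shapes and head counts are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def grouped_query_attention(q, k, v, n_kv_heads):
    """Minimal grouped-query attention sketch.

    q: (n_q_heads, seq_len, head_dim)
    k, v: (n_kv_heads, seq_len, head_dim)
    Each group of n_q_heads // n_kv_heads query heads shares one KV head.
    """
    n_q_heads, seq_len, head_dim = q.shape
    group_size = n_q_heads // n_kv_heads
    out = np.empty_like(q)
    for h in range(n_q_heads):
        kv = h // group_size  # index of the KV head this query head shares
        scores = q[h] @ k[kv].T / np.sqrt(head_dim)
        # numerically stable softmax over the key dimension
        scores = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights = scores / scores.sum(axis=-1, keepdims=True)
        out[h] = weights @ v[kv]
    return out

# Example: 8 query heads sharing 2 KV heads (group size 4)
rng = np.random.default_rng(0)
q = rng.standard_normal((8, 16, 64))
k = rng.standard_normal((2, 16, 64))
v = rng.standard_normal((2, 16, 64))
out = grouped_query_attention(q, k, v, n_kv_heads=2)
print(out.shape)  # → (8, 16, 64)
```

With 8 query heads and 2 KV heads, only 2 heads' worth of keys and values need to be cached during inference, while all 8 query heads still attend independently; the paper's study varies exactly this query/KV head ratio alongside model size and context length.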