Scaling Laws Meet Model Architecture: Toward Inference-Efficient LLMs

Song Bian, Tao Yu, Shivaram Venkataraman, Youngsuk Park

2025-10-24

Summary

This paper investigates how to build better large language models (LLMs) that are both accurate and fast to use. It focuses on finding the right balance between model size, how the model is structured internally, and the amount of data it's trained on.

What's the problem?

As LLMs get more powerful, they also become much more expensive to run: it takes more computing power and time to get an answer from them. While we know that making models bigger and training them on more data generally improves performance, that knowledge doesn't tell us the *best* way to build these models to get the most accuracy for the least cost. There's a gap in understanding how different design choices affect both speed and accuracy.

What's the solution?

The researchers developed a way to predict the best architecture for an LLM, considering factors like the hidden size (the width of the model's internal representations), the mlp-to-attention ratio (how parameters are split between the layers that relate words to one another and the layers that process each word's representation), and a technique called grouped-query attention (GQA). They trained more than 200 models with varying sizes and structures, then used this data to fit a 'scaling law' that predicts the optimal design for a given computing budget. This scaling law builds on the established Chinchilla framework but adds in architectural details.
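To make the idea concrete, here is a minimal sketch of what "augmenting a Chinchilla-style scaling law with architectural information" could look like. The Chinchilla part (loss as a function of parameter count N and training tokens D) follows the well-known published form; the architecture-conditioned terms, the reference ratio of 4, and the coefficients `gamma` and `delta` are purely illustrative assumptions, not the paper's actual functional form.

```python
def chinchilla_loss(N, D, E=1.69, A=406.4, B=410.7, alpha=0.34, beta=0.28):
    """Chinchilla-style loss estimate: an irreducible term E plus terms that
    shrink as parameters (N) and training tokens (D) grow."""
    return E + A / N**alpha + B / D**beta

def conditional_loss(N, D, mlp_to_attn_ratio, kv_groups, gamma=0.01, delta=0.005):
    """Hypothetical conditional variant: perturb the Chinchilla estimate with
    penalties tied to architectural choices (illustrative form only)."""
    base = chinchilla_loss(N, D)
    # Assumed penalty for deviating from a reference mlp:attention ratio of 4.
    arch_penalty = gamma * abs(mlp_to_attn_ratio - 4.0)
    # Assumed small penalty that shrinks as more query heads share a KV head.
    gqa_penalty = delta / kv_groups
    return base + arch_penalty + gqa_penalty
```

Given such a fitted law, the search framework would sweep candidate architectures under a fixed compute budget and pick the one with the lowest predicted loss per unit of inference cost.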

Why it matters?

This work is important because it provides a practical guide for building LLMs that are both highly accurate *and* efficient. Using the scaling law and search framework, developers can design models that outperform existing open-source options like LLaMA-3.2, achieving higher accuracy and faster response times under the same training budget. This makes powerful language models more accessible and usable in real-world applications.

Abstract

Scaling the number of parameters and the size of training data has proven to be an effective strategy for improving large language model (LLM) performance. Yet, as these models grow increasingly powerful and widely deployed, the cost of inference has become a pressing concern. Despite its importance, the trade-off between model accuracy and inference efficiency remains underexplored. In this work, we examine how key architectural factors influence both inference cost and accuracy: hidden size, the allocation of parameters between MLP and attention (the mlp-to-attention ratio), and grouped-query attention (GQA). We introduce a conditional scaling law that augments the Chinchilla framework with architectural information, along with a search framework for identifying architectures that are simultaneously inference-efficient and accurate. To validate our approach, we train more than 200 models spanning 80M to 3B parameters and 8B to 100B training tokens, and fit the proposed conditional scaling law. Our results show that the conditional scaling law reliably predicts optimal architectural choices and that the resulting models outperform existing open-source baselines. Under the same training budget, optimized architectures achieve up to 2.1% higher accuracy and 42% greater inference throughput compared to LLaMA-3.2.
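One reason GQA figures in the abstract's inference-efficiency story is its effect on the KV cache, which often dominates decoding memory and bandwidth. The sketch below computes per-sequence KV-cache size under multi-head versus grouped-query attention; the model shapes are illustrative values resembling a LLaMA-3.2-3B-class configuration, not numbers taken from the paper.

```python
def kv_cache_bytes(seq_len, num_layers, num_kv_heads, head_dim, bytes_per_elem=2):
    """Per-sequence KV-cache size: keys and values (factor of 2), one entry per
    layer, per KV head, per position, per head dimension, at the given precision."""
    return 2 * num_layers * num_kv_heads * head_dim * seq_len * bytes_per_elem

# Illustrative 3B-class shapes: 28 layers, 24 query heads, head_dim 128, fp16.
mha = kv_cache_bytes(8192, 28, 24, 128)  # if every query head kept its own KV head
gqa = kv_cache_bytes(8192, 28, 8, 128)   # grouped-query attention with 8 KV heads
print(f"GQA shrinks the KV cache {mha / gqa:.0f}x")  # prints "GQA shrinks the KV cache 3x"
```

The cache scales linearly with the number of KV heads, so sharing each KV head among three query heads cuts cache memory threefold, which in turn raises achievable batch size and decoding throughput.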