Variance Control via Weight Rescaling in LLM Pre-training

Louis Owen, Abhay Kumar, Nilabhra Roy Chowdhury, Fabian Güra

2025-03-25

Summary

This paper describes how to train large language models (LLMs) more effectively by controlling the variance of the model's weights, both when they are first set and as they change during training.

What's the problem?

How a model's weights are initialized, and how their variance grows during training, strongly affects final performance, but there is little specific guidance on how to manage this during LLM pre-training.

What's the solution?

The researchers introduce two techniques: Layer Index Rescaling (LIR), a weight initialization scheme that takes a layer's position in the network into account, and Target Variance Rescaling (TVR), a strategy that keeps weight variance close to a target value during training.
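The summary does not give the exact formulas, so the following is only an illustrative sketch under assumed interpretations: that LIR shrinks the initialization standard deviation as the layer index grows, and that TVR rescales a weight tensor so its empirical standard deviation matches a target. The function names and the 1/sqrt(layer_index) schedule are hypothetical, not taken from the paper.

```python
import math
import random

def lir_init_std(base_std, layer_index):
    """Hypothetical LIR sketch: shrink init std with layer depth (1-based index)."""
    return base_std / math.sqrt(layer_index)

def tvr_rescale(weights, target_std):
    """Hypothetical TVR sketch: rescale weights so their std matches target_std."""
    n = len(weights)
    mean = sum(weights) / n
    std = math.sqrt(sum((w - mean) ** 2 for w in weights) / n)
    return [w * (target_std / std) for w in weights]

# Demo: deeper layers get smaller init std; TVR pulls variance back to target.
rng = random.Random(0)
weights = [rng.gauss(0.0, 0.05) for _ in range(10_000)]
rescaled = tvr_rescale(weights, target_std=0.02)
```

For the actual schemes and hyperparameters, see the authors' code linked in the abstract below.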

Why it matters?

This work matters because better variance control improved downstream task performance by up to 4.6% on common pre-training benchmarks and reduced extreme activation values, which makes the resulting models easier to quantize and to train in low precision.

Abstract

The outcome of Large Language Model (LLM) pre-training strongly depends on weight initialization and variance control strategies. Although the importance of initial variance control has been well documented in neural networks in general, the literature on initialization and management of its growth during LLM pre-training, specifically, is somewhat sparse. In this paper, we introduce the Layer Index Rescaling (LIR) weight initialization scheme, and the Target Variance Rescaling (TVR) variance control strategy. Experiments on a 1B parameter LLaMA model demonstrate that better variance management using these techniques yields substantial improvements in downstream task performance (up to 4.6% on common pre-training benchmarks) and reduces extreme activation values, thus mitigating challenges associated with quantization and low-precision training. Our code is available at: https://github.com/bluorion-com/weight_rescaling.