Power Scheduler: A Batch Size and Token Number Agnostic Learning Rate Scheduler
Yikang Shen, Matthew Stallone, Mayank Mishra, Gaoyuan Zhang, Shawn Tan, Aditya Prasad, Adriana Meza Soria, David D. Cox, Rameswar Panda
2024-08-27

Summary
This paper introduces the Power Scheduler, a learning rate scheduler for language model pretraining that makes it easier to find good settings without testing many different configurations.
What's the problem?
Finding the right learning rate for training large language models is complicated because it depends on many factors like batch size and the amount of training data. Testing these settings can be very expensive and time-consuming, especially for models with billions of parameters.
What's the solution?
The authors developed a new learning rate scheduler called the Power Scheduler that simplifies this process. Through thousands of small-scale experiments, they found a power-law relationship between the optimal learning rate, the batch size, and the number of training tokens. The resulting scheduler works well regardless of batch size or the amount of training data, allowing one set of hyperparameters to transfer across different model sizes and architectures.
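The idea can be sketched as a learning rate that decays as a power of the number of tokens seen, capped at a maximum and followed by a final cooldown (in the spirit of the WSD-style decay the paper builds on). This is a minimal illustration, not the authors' implementation: the coefficient `a`, exponent `b`, and cooldown fraction below are hypothetical placeholder values, not the paper's fitted constants.

```python
def power_lr(tokens_seen, max_tokens, lr_max=0.02, a=4.6, b=0.51,
             cooldown_frac=0.1):
    """Power-law learning rate as a function of tokens seen.

    a, b, lr_max, and cooldown_frac are illustrative placeholders,
    not the constants fitted in the paper.
    """
    # Power-law decay in the number of tokens processed so far,
    # capped at lr_max early in training.
    lr = min(lr_max, a * max(tokens_seen, 1) ** (-b))
    # Linear cooldown over the final fraction of training
    # (analogous to the decay phase of a WSD schedule).
    cooldown_start = (1 - cooldown_frac) * max_tokens
    if tokens_seen > cooldown_start:
        remaining = (max_tokens - tokens_seen) / (max_tokens - cooldown_start)
        lr *= max(remaining, 0.0)
    return lr
```

Because the schedule depends on tokens seen rather than a fixed total step count, the same settings can in principle be reused when the training budget changes.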
Why it matters?
This research is important because it helps make training large language models more efficient and accessible. By reducing the need for extensive testing of different settings, it allows researchers and developers to focus on improving AI models without getting bogged down by technical details.
Abstract
Finding the optimal learning rate for language model pretraining is a challenging task. This is not only because there is a complicated correlation between learning rate, batch size, number of training tokens, model size, and other hyperparameters but also because it is prohibitively expensive to perform a hyperparameter search for large language models with billions or trillions of parameters. Recent studies propose using small proxy models and small corpora to perform hyperparameter searches and transferring the optimal parameters to large models and large corpora. While the zero-shot transferability is theoretically and empirically proven for model-size-related hyperparameters, like depth and width, the zero-shot transfer from small corpora to large corpora is underexplored. In this paper, we study the correlation between optimal learning rate, batch size, and number of training tokens for the recently proposed WSD scheduler. After thousands of small experiments, we found a power-law relationship between these variables and demonstrated its transferability across model sizes. Based on this observation, we propose a new learning rate scheduler, the Power scheduler, that is agnostic to the number of training tokens and batch size. Our experiments show that combining the Power scheduler with Maximal Update Parametrization (muP) can consistently achieve impressive performance with one set of hyperparameters regardless of the number of training tokens, batch size, model size, and even model architecture. Our 3B dense and MoE models trained with the Power scheduler achieve performance comparable to state-of-the-art small language models. We open-source these pretrained models at https://ibm.biz/BdKhLa.
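The abstract's power-law finding comes from fitting a curve of the form lr_opt = a * T^(-b) to many (tokens, optimal learning rate) pairs. A standard way to perform such a fit is linear regression in log-log space; the sketch below demonstrates the mechanics on synthetic data generated from arbitrary constants (a = 4.6, b = 0.5), which are not the paper's fitted values.

```python
import math

def fit_power_law(tokens, lrs):
    """Fit lr = a * T^(-b) by least-squares regression on (log T, log lr)."""
    xs = [math.log(t) for t in tokens]
    ys = [math.log(lr) for lr in lrs]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope of the log-log line is -b; its intercept is log(a).
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return math.exp(intercept), -slope  # (a, b)

# Synthetic (tokens, optimal lr) pairs following an exact power law.
tokens = [10 ** k for k in range(7, 12)]
lrs = [4.6 * t ** -0.5 for t in tokens]
a, b = fit_power_law(tokens, lrs)
```

On noiseless synthetic data the regression recovers the generating constants; on real sweep results the same procedure yields the fitted exponent and coefficient that the scheduler then reuses across training budgets.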