APOLLO: SGD-like Memory, AdamW-level Performance

Hanqing Zhu, Zhenyu Zhang, Wenyan Cong, Xi Liu, Sem Park, Vikas Chandra, Bo Long, David Z. Pan, Zhangyang Wang, Jinwon Lee

2024-12-09

Summary

This paper introduces APOLLO, a memory-efficient optimizer that reduces the memory needed to train large language models (LLMs) while maintaining AdamW-level performance.

What's the problem?

Training large language models demands a great deal of memory, especially with popular optimizers like AdamW, which keep two full-size state tensors for every model parameter. This forces researchers to buy expensive hardware or shrink their batch sizes, limiting how effectively they can train these models.

What's the solution?

The authors propose APOLLO, which coarsens AdamW's element-wise learning rate adaptation into a structured, channel-wise update. Using a method called Approximated Gradient Scaling, APOLLO estimates these scaling factors with an auxiliary low-rank optimizer state built from a pure random projection, avoiding the costly SVD operations that other memory-efficient optimizers rely on. In their experiments, APOLLO performed as well as or better than AdamW while using significantly less memory, making it practical to train large models on less powerful hardware.
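The core idea can be illustrated with a minimal NumPy sketch. This is a hedged simplification, not the authors' implementation: the function name `apollo_step`, the fixed Gaussian projection, and the per-column norm-ratio scaling are assumptions that only approximate the method described in the abstract (Adam-style moments are kept on a low-rank projection of the gradient, and the resulting channel-wise scaling factors are applied to the full gradient in an SGD-like step).

```python
import numpy as np

def apollo_step(W, G, state, lr=1e-3, rank=4,
                beta1=0.9, beta2=0.999, eps=1e-8):
    """One APOLLO-style update (illustrative sketch only).

    Project the gradient into a low-rank space with a fixed random
    projection, run Adam-style moment updates on the projected gradient,
    derive per-channel scaling factors from it, and apply those factors
    to the full-rank gradient in an SGD-like step.
    """
    n, m = G.shape
    if "P" not in state:
        rng = np.random.default_rng(0)
        # Pure random projection -- no SVD needed.
        state["P"] = rng.standard_normal((rank, n)) / np.sqrt(rank)
        state["m"] = np.zeros((rank, m))   # low-rank first moment
        state["v"] = np.zeros((rank, m))   # low-rank second moment
        state["t"] = 0
    state["t"] += 1

    R = state["P"] @ G                     # projected gradient, shape (rank, m)
    state["m"] = beta1 * state["m"] + (1 - beta1) * R
    state["v"] = beta2 * state["v"] + (1 - beta2) * R**2
    m_hat = state["m"] / (1 - beta1 ** state["t"])
    v_hat = state["v"] / (1 - beta2 ** state["t"])
    R_adapted = m_hat / (np.sqrt(v_hat) + eps)

    # Channel-wise scaling: how strongly Adam would rescale each column,
    # measured in the low-rank space.
    s = (np.linalg.norm(R_adapted, axis=0)
         / (np.linalg.norm(R, axis=0) + eps))

    W -= lr * G * s                        # SGD-like step on the scaled gradient
    return W
```

Note that the optimizer state here is only `2 * rank * m` floats per weight matrix (plus the projection), instead of AdamW's `2 * n * m`; with `rank=1` this mirrors the spirit of APOLLO-Mini's SGD-level memory cost.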

Why it matters?

This research is important because it makes training large language models more accessible by lowering the memory requirements. With APOLLO, more researchers can work on developing advanced AI systems without needing top-of-the-line computers, which could lead to more innovations in natural language processing and other AI applications.

Abstract

Large language models (LLMs) are notoriously memory-intensive during training, particularly with the popular AdamW optimizer. This memory burden necessitates using more or higher-end GPUs or reducing batch sizes, limiting training scalability and throughput. To address this, various memory-efficient optimizers have been proposed to reduce optimizer memory usage. However, they face critical challenges: (i) reliance on costly SVD operations; (ii) significant performance trade-offs compared to AdamW; and (iii) still substantial optimizer memory overhead to maintain competitive performance. In this work, we identify that AdamW's learning rate adaptation rule can be effectively coarsened as a structured learning rate update. Based on this insight, we propose Approximated Gradient Scaling for Memory-Efficient LLM Optimization (APOLLO), which approximates learning rate scaling using an auxiliary low-rank optimizer state based on pure random projection. This structured learning rate update rule makes APOLLO highly tolerant to further memory reductions while delivering comparable pre-training performance. Even its rank-1 variant, APOLLO-Mini, achieves superior pre-training performance compared to AdamW with SGD-level memory costs. Extensive experiments demonstrate that the APOLLO series performs on-par with or better than AdamW, while achieving greater memory savings by nearly eliminating the optimization states of AdamW. These savings provide significant system-level benefits: (1) Enhanced Throughput: 3x throughput on an 8xA100-80GB setup compared to AdamW by supporting 4x larger batch sizes. (2) Improved Model Scalability: Pre-training LLaMA-13B with naive DDP on A100-80GB GPUs without system-level optimizations. (3) Low-End GPU Friendly Pre-training: Pre-training LLaMA-7B on a single GPU using less than 12 GB of memory with weight quantization.
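To give a rough sense of the scale of the savings the abstract claims, the back-of-the-envelope arithmetic below compares AdamW's two full-size moment buffers against an SGD-like optimizer that keeps essentially no per-parameter state. This is a sketch under stated assumptions: fp32 (4-byte) optimizer states, an approximate 7B parameter count, and gradients/activations ignored.

```python
# Back-of-the-envelope optimizer-state memory, assuming fp32 (4-byte) states.
params = 7e9                    # LLaMA-7B parameter count (approximate)

adamw_bytes = 2 * params * 4    # AdamW: first + second moment per parameter
sgd_bytes = 0                   # plain SGD keeps no per-parameter state

print(f"AdamW optimizer states: ~{adamw_bytes / 2**30:.0f} GiB")
print(f"SGD optimizer states:   ~{sgd_bytes / 2**30:.0f} GiB")
```

Eliminating roughly 50 GiB of optimizer state is what makes the reported system-level wins plausible: larger batch sizes on the same GPUs, naive DDP for a 13B model, and (with weight quantization) 7B pre-training under 12 GB on a single GPU.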