Optimizing Large Language Model Training Using FP4 Quantization
Ruizhe Wang, Yeyun Gong, Xiao Liu, Guoshuai Zhao, Ziyue Yang, Baining Guo, Zhengjun Zha, Peng Cheng
2025-01-29

Summary
This paper introduces a new way to train large language models (LLMs) using a very low-precision number format called FP4. The researchers built a training framework that lets these huge AI models be trained more efficiently without losing much accuracy.
What's the problem?
Training big AI language models takes enormous amounts of computing power and memory. Lower-precision formats such as FP8 have already been used to make training faster and lighter on memory, but dropping all the way down to FP4 has been much harder: with only four bits per number, the rounding errors become large enough to destabilize learning and hurt the model's accuracy.
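To see how little FP4 can actually hold, here is a small Python sketch that enumerates every value the format can represent, assuming the common E2M1 layout (1 sign bit, 2 exponent bits, 1 mantissa bit); the layout choice is an assumption for illustration, not a detail taken from the paper:

```python
# Enumerate every value FP4 can represent, assuming the common E2M1 layout:
# 1 sign bit, 2 exponent bits, 1 mantissa bit (16 bit patterns in total).
def fp4_e2m1_values():
    values = set()
    for sign in (1, -1):
        for exp_bits in range(4):        # 2-bit exponent field
            for man_bit in range(2):     # 1-bit mantissa field
                if exp_bits == 0:        # subnormal numbers: 0 and 0.5
                    magnitude = man_bit * 0.5
                else:                    # normal numbers: (1 + m/2) * 2^(e-1)
                    magnitude = (1 + man_bit * 0.5) * 2 ** (exp_bits - 1)
                values.add(sign * magnitude)
    return sorted(values)

print(fp4_e2m1_values())
# -> [-6.0, -4.0, -3.0, -2.0, -1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]
```

Only fifteen distinct values (positive and negative zero coincide) have to stand in for every number fed through the quantized operations, which is why naive FP4 rounding discards far more information than FP8 or BF16.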
What's the solution?
The researchers came up with two key ideas to make FP4 work for training LLMs. First, they designed a differentiable quantization estimator, a gradient technique that keeps weight updates accurate even though FP4 numbers are very coarse. Second, they added an outlier clamping and compensation strategy that tames the extreme values that pop up during training without throwing away the information they carry. They combined these ideas with mixed-precision training and vector-wise quantization into a complete training framework and tested it on models with up to 13 billion parameters trained on up to 100 billion tokens.
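The sketch below is a rough, simplified illustration of those two ideas in PyTorch, not the paper's actual method: the backward pass simply lets gradients flow through the rounding step (a straight-through stand-in for the paper's differentiable estimator), and the outlier handling just keeps the clamped-off residual in high precision. The names FP4Round, quantize_with_outlier_compensation, and the clamp_quantile knob are all illustrative assumptions.

```python
import torch

# The eight non-negative magnitudes representable in FP4 (E2M1); sign is a separate bit.
FP4_GRID = torch.tensor([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

class FP4Round(torch.autograd.Function):
    """Simulated FP4 rounding. The backward pass passes gradients through the
    rounding step unchanged (a straight-through stand-in for a differentiable
    quantization estimator)."""

    @staticmethod
    def forward(ctx, x, scale):
        grid = FP4_GRID.to(x.device)
        # Normalize by the per-row scale and snap each entry to the nearest grid value.
        idx = ((x / scale).abs().unsqueeze(-1) - grid).abs().argmin(dim=-1)
        return torch.sign(x) * grid[idx] * scale

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out, None  # treat the rounding step as the identity in backward

def quantize_with_outlier_compensation(x, clamp_quantile=0.999):
    """Clamp the rare extreme entries, quantize the clamped tensor to simulated FP4,
    and keep the clamped-off residual in high precision so it can be added back."""
    with torch.no_grad():
        limit = torch.quantile(x.abs().float(), clamp_quantile, dim=-1, keepdim=True)
    x_clamped = x.clamp(-limit, limit)
    residual = x - x_clamped                          # sparse, high-precision leftovers
    scale = (x_clamped.abs().amax(dim=-1, keepdim=True) / FP4_GRID[-1]).clamp_min(1e-12)
    return FP4Round.apply(x_clamped, scale) + residual
```

For example, quantize_with_outlier_compensation(torch.randn(8, 1024)) returns a tensor whose non-outlier entries sit on the FP4 grid (up to each row's scale), while the backward pass behaves as if no rounding had happened.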
Why it matters?
This research matters because it could make training large AI models significantly cheaper and faster. If FP4 can stand in for higher-precision formats, the same computing budget could train larger and more capable models, which could lead to further breakthroughs and make advanced AI more accessible to researchers and companies with limited resources. As new chips with native FP4 support become available, this work could become even more important for the future of AI development.
Abstract
The growing computational demands of training large language models (LLMs) necessitate more efficient methods. Quantized training presents a promising solution by enabling low-bit arithmetic operations to reduce these costs. While FP8 precision has demonstrated feasibility, leveraging FP4 remains a challenge due to significant quantization errors and limited representational capacity. This work introduces the first FP4 training framework for LLMs, addressing these challenges with two key innovations: a differentiable quantization estimator for precise weight updates and an outlier clamping and compensation strategy to prevent activation collapse. To ensure stability, the framework integrates a mixed-precision training scheme and vector-wise quantization. Experimental results demonstrate that our FP4 framework achieves accuracy comparable to BF16 and FP8, with minimal degradation, scaling effectively to 13B-parameter LLMs trained on up to 100B tokens. With the emergence of next-generation hardware supporting FP4, our framework sets a foundation for efficient ultra-low precision training.
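To make the abstract's stabilization pieces more concrete, here is a hypothetical sketch of how vector-wise quantization might sit inside a mixed-precision linear layer; FP4Linear, fake_fp4_vectorwise, and the BF16 master-weight choice are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

# Largest magnitude representable in FP4 (E2M1) and the positive half of its grid.
FP4_MAX = 6.0
FP4_LEVELS = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

def fake_fp4_vectorwise(x):
    """Vector-wise 'fake' quantization: each row gets its own scale so that its
    absolute maximum lands on FP4_MAX, then entries are rounded onto the FP4 grid.
    A straight-through trick keeps the operation differentiable."""
    scale = (x.abs().amax(dim=-1, keepdim=True) / FP4_MAX).clamp_min(1e-12)
    grid = torch.tensor(FP4_LEVELS, device=x.device)
    idx = ((x / scale).abs().unsqueeze(-1) - grid).abs().argmin(dim=-1)
    q = torch.sign(x) * grid[idx] * scale
    return x + (q - x).detach()   # forward: quantized values; backward: identity

class FP4Linear(nn.Module):
    """Hypothetical mixed-precision layer: master weights stay in BF16, while the
    weight and activation inputs to the matrix multiply are quantized vector-wise."""

    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(
            torch.randn(out_features, in_features, dtype=torch.bfloat16) * 0.02)

    def forward(self, x):
        w_q = fake_fp4_vectorwise(self.weight.float())  # one scale per output channel
        x_q = fake_fp4_vectorwise(x.float())            # one scale per token
        return (x_q @ w_q.t()).to(x.dtype)
```

In a real system the 4-bit codes and per-vector scales would be handed to dedicated FP4 tensor-core kernels; the simulation above only mimics the numerics in full precision.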