HybridNorm: Towards Stable and Efficient Transformer Training via Hybrid Normalization
Zhijian Zhuo, Yutao Zeng, Ya Wang, Sijun Zhang, Jian Yang, Xiaoqing Li, Xun Zhou, Jinwen Ma
2025-03-07
Summary
This paper introduces HybridNorm, a new method for training transformer models, such as those used in AI language systems, more effectively by combining two existing normalization techniques for better stability and performance.
What's the problem?
Training deep transformer models is challenging because the placement of layer normalization, which helps stabilize training, forces a trade-off: Pre-Norm makes training easier but yields weaker final performance, while Post-Norm performs better but is harder to train stably. This trade-off limits the performance of these models.
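To make the trade-off concrete, here is a minimal NumPy sketch (not from the paper) of the two standard sub-layer formulations. With Pre-Norm, the residual path carries the input through untouched, which is why training is easier; with Post-Norm, every output passes through a normalization, so there is no clean identity path.

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Normalize over the last (feature) dimension.
    return (x - x.mean(-1, keepdims=True)) / np.sqrt(x.var(-1, keepdims=True) + eps)

def pre_norm_sublayer(x, f):
    # Pre-Norm: normalize the sub-layer input; the residual branch
    # passes x through unchanged, giving a prominent identity path.
    return x + f(layer_norm(x))

def post_norm_sublayer(x, f):
    # Post-Norm: normalize after the residual add; the identity path
    # is disturbed by the normalization, which is harder to train.
    return layer_norm(x + f(x))
```

A quick way to see the difference: if the sub-layer function contributes nothing (returns zeros), Pre-Norm is exactly the identity map, while Post-Norm still rescales its input.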
What's the solution?
The researchers created HybridNorm, which combines the strengths of both Pre-Norm and Post-Norm. They applied a special type of normalization called QKV-Norm in the attention mechanism for stability and used Post-Norm in the feed-forward network for better performance. This approach was tested on various benchmarks and consistently outperformed traditional methods while improving training efficiency.
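The block structure described above can be sketched in a few lines of NumPy. This is an illustrative single-head toy version under my own assumptions, not the paper's implementation: `layer_norm` stands in for whatever normalization the authors use, the exact placement of residual connections follows the description (QKV-Norm inside attention, Post-Norm after the FFN residual), and all weight names are hypothetical.

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Normalize over the last (feature) dimension.
    return (x - x.mean(-1, keepdims=True)) / np.sqrt(x.var(-1, keepdims=True) + eps)

def softmax(x):
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def attention_qkv_norm(x, Wq, Wk, Wv, Wo):
    # QKV-Norm: normalize the query, key, and value projections
    # individually before computing attention, which keeps the
    # attention logits in a stable range.
    q = layer_norm(x @ Wq)
    k = layer_norm(x @ Wk)
    v = layer_norm(x @ Wv)
    scores = softmax(q @ k.T / np.sqrt(q.shape[-1]))
    return (scores @ v) @ Wo

def ffn(x, W1, W2):
    # Simple two-layer ReLU MLP as a stand-in feed-forward network.
    return np.maximum(x @ W1, 0.0) @ W2

def hybridnorm_block(x, p):
    # Attention sub-layer: residual add around QKV-normalized attention.
    x = x + attention_qkv_norm(x, p["Wq"], p["Wk"], p["Wv"], p["Wo"])
    # FFN sub-layer: Post-Norm, i.e. normalize AFTER the residual add.
    return layer_norm(x + ffn(x, p["W1"], p["W2"]))
```

Because the block ends with a Post-Norm, its output rows come out normalized, which is one way the scheme keeps activations bounded as depth grows.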
Why it matters?
This matters because it makes training large AI models faster and more reliable without sacrificing quality. HybridNorm could help build better-performing AI systems for tasks like language understanding, translation, and other machine learning applications, while also saving time and computational resources.
Abstract
Transformers have become the de facto architecture for a wide range of machine learning tasks, particularly in large language models (LLMs). Despite their remarkable performance, challenges remain in training deep transformer networks, especially regarding the location of layer normalization. While Pre-Norm structures facilitate easier training due to their more prominent identity path, they often yield suboptimal performance compared to Post-Norm. In this paper, we propose HybridNorm, a straightforward yet effective hybrid normalization strategy that integrates the advantages of both Pre-Norm and Post-Norm approaches. Specifically, HybridNorm employs QKV normalization within the attention mechanism and Post-Norm in the feed-forward network (FFN) of each transformer block. This design not only stabilizes training but also enhances performance, particularly in the context of LLMs. Comprehensive experiments in both dense and sparse architectures show that HybridNorm consistently outperforms both Pre-Norm and Post-Norm approaches, achieving state-of-the-art results across various benchmarks. These findings highlight the potential of HybridNorm as a more stable and effective technique for improving the training and performance of deep transformer models. Code is available at https://github.com/BryceZhuo/HybridNorm.