Transformers without Normalization
Jiachen Zhu, Xinlei Chen, Kaiming He, Yann LeCun, Zhuang Liu
2025-03-14
Summary
This paper shows a new way to build AI models (like ChatGPT or image generators) without using normalization layers, which were long thought to be essential for good performance.
What's the problem?
Normalization layers add extra computation and design complexity to AI models and may not always be necessary, but removing them has historically made models perform worse.
What's the solution?
The researchers replaced normalization layers with a simple tanh-based operation called Dynamic Tanh (DyT), which squashes each activation with a learnable scaling factor instead of computing statistics (mean and variance) over the data.
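To make the idea concrete, here is a minimal sketch of the DyT operation in plain Python. Note this is a simplification for illustration: in the paper, the scaling factor alpha is learned during training, and the operation also includes learnable scale and shift parameters like those in a normalization layer; here alpha is fixed.

```python
import math

def dyt(x, alpha=0.5):
    """Dynamic Tanh: element-wise tanh(alpha * x).

    Simplified sketch: alpha is fixed here, whereas the paper learns it
    (along with an affine scale and shift) during training.
    """
    return [math.tanh(alpha * v) for v in x]

# Large activations saturate toward +/-1, giving the S-shaped,
# tanh-like mapping that the paper observes layer normalization
# producing in trained Transformers.
print(dyt([-10.0, -1.0, 0.0, 1.0, 10.0]))
```

Because DyT is purely element-wise, it needs no reduction over the feature dimension, which is what makes it a cheap drop-in replacement.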
Why does it matter?
This simplifies AI development, making models cheaper to run and easier to design, which could speed up progress in areas like language translation or medical image analysis.
Abstract
Normalization layers are ubiquitous in modern neural networks and have long been considered essential. This work demonstrates that Transformers without normalization can achieve the same or better performance using a remarkably simple technique. We introduce Dynamic Tanh (DyT), an element-wise operation DyT(x) = tanh(αx), as a drop-in replacement for normalization layers in Transformers. DyT is inspired by the observation that layer normalization in Transformers often produces tanh-like, S-shaped input-output mappings. By incorporating DyT, Transformers without normalization can match or exceed the performance of their normalized counterparts, mostly without hyperparameter tuning. We validate the effectiveness of Transformers with DyT across diverse settings, ranging from recognition to generation, supervised to self-supervised learning, and computer vision to language models. These findings challenge the conventional understanding that normalization layers are indispensable in modern neural networks, and offer new insights into their role in deep networks.
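The "drop-in replacement" claim in the abstract can be illustrated by comparing the two operations side by side. The sketch below (a simplified illustration, not the paper's implementation) contrasts standard layer normalization, which computes per-vector statistics, with DyT, which needs none:

```python
import math

def layer_norm(x, eps=1e-5):
    # Standard layer normalization: subtract the mean and divide by
    # the standard deviation computed over the feature dimension.
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)
    return [(v - mean) / math.sqrt(var + eps) for v in x]

def dyt(x, alpha=0.5):
    # Dynamic Tanh: no statistics at all, just an element-wise
    # squashing nonlinearity (alpha is learnable in the paper).
    return [math.tanh(alpha * v) for v in x]

x = [-4.0, -1.0, 0.0, 1.0, 4.0]
print(layer_norm(x))  # depends on the whole vector
print(dyt(x))         # each element mapped independently
```

Both produce bounded, roughly S-shaped mappings on typical activations, which is the observation that motivated DyT; the key difference is that DyT avoids the reduction over features that normalization requires.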