MUDDFormer: Breaking Residual Bottlenecks in Transformers via Multiway Dynamic Dense Connections
Da Xiao, Qingye Meng, Shengping Li, Xingyuan Yuan
2025-02-19
Summary
This paper introduces MUDDFormer, a new way to improve Transformer-based AI language models. It proposes a method called Multiway Dynamic Dense (MUDD) connections that helps information flow better between the layers of the model.
What's the problem?
Current Transformer models have a limitation in how information moves between their layers. They rely on residual connections, which can bottleneck or restrict the flow of information. This makes it harder for the model to learn and to process complex language tasks efficiently.
What's the solution?
The researchers created MUDD connections, which act like smart highways for information inside the model. Unlike older dense-connection methods that use fixed, shared routes, MUDD connections change dynamically based on what the model is processing at each position. They create separate paths for each type of input a Transformer block consumes (query, key, value, and residual) and adjust those paths on the fly. Adding MUDD connections to an existing Transformer yields what the authors call MUDDFormer.
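The core idea (mix all earlier layers' outputs with weights generated per position, separately for each input stream) can be sketched in a few lines of NumPy. Everything below is an illustrative assumption about the mechanism, not the paper's exact parameterization: the function name, the single-matrix weight generator `w_gen`, and the use of the topmost hidden state to produce the weights are all simplifications.

```python
import numpy as np

def dynamic_dense_aggregate(hiddens, w_gen):
    """Mix the outputs of all earlier layers with weights generated
    dynamically at each sequence position (a sketch of the dynamic
    dense-connection idea; `w_gen` is a hypothetical weight-generation
    matrix, not the paper's exact formulation).

    hiddens: [n_layers, seq_len, d_model] outputs of layers 0..n-1
    w_gen:   [d_model, n_layers] maps a hidden state to n_layers
             mixing weights
    """
    # Per-position mixing weights from the most recent hidden state.
    weights = hiddens[-1] @ w_gen                      # [seq_len, n_layers]
    # Weighted sum over layers, independently at each position.
    return np.einsum("sl,lsd->sd", weights, hiddens)   # [seq_len, d_model]

rng = np.random.default_rng(0)
n_layers, seq_len, d_model = 4, 5, 8
hiddens = rng.standard_normal((n_layers, seq_len, d_model))

# One weight-generation matrix per decoupled input stream of a block,
# so query, key, value, and residual each see a different dynamic
# mixture of all earlier layers' outputs.
streams = {name: rng.standard_normal((d_model, n_layers)) * 0.1
           for name in ("query", "key", "value", "residual")}
block_inputs = {name: dynamic_dense_aggregate(hiddens, w)
                for name, w in streams.items()}
```

Because the mixing weights depend on the hidden state at each position, two tokens in the same sequence can draw on different blends of earlier layers, which is what distinguishes this from static dense connections with shared weights.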
Why it matters?
This matters because MUDDFormer makes language models markedly more efficient. In tests, it matched the performance of much larger models while using far less compute: a MUDDFormer model with 2.8 billion parameters matched a 6.9-billion-parameter model and even rivaled a 12-billion-parameter model in five-shot evaluations, while adding only a tiny fraction of extra parameters and computation. This means we could build smarter AI systems that use less energy and fewer resources, making advanced AI more accessible and environmentally friendly.
Abstract
We propose MUltiway Dynamic Dense (MUDD) connections, a simple yet effective method to address the limitations of residual connections and enhance cross-layer information flow in Transformers. Unlike existing dense connection approaches with static and shared connection weights, MUDD generates connection weights dynamically depending on hidden states at each sequence position and for each decoupled input stream (the query, key, value or residual) of a Transformer block. MUDD connections can be seamlessly integrated into any Transformer architecture to create MUDDFormer. Extensive experiments show that MUDDFormer significantly outperforms Transformers across various model architectures and scales in language modeling, achieving the performance of Transformers trained with 1.8X-2.4X compute. Notably, MUDDPythia-2.8B matches Pythia-6.9B in pretraining ppl and downstream tasks and even rivals Pythia-12B in five-shot settings, while adding only 0.23% parameters and 0.4% computation. Code in JAX and PyTorch and pre-trained models are available at https://github.com/Caiyun-AI/MUDDFormer .