TransMamba: Flexibly Switching between Transformer and Mamba
Yixing Li, Ruobing Xie, Zhen Yang, Xingwu Sun, Shuaipeng Li, Weidong Han, Zhanhui Kang, Yu Cheng, Chengzhong Xu, Di Wang, Jie Jiang
2025-04-07
Summary
This paper introduces TransMamba, an AI system that switches between two model designs (Transformers and Mamba) to handle both short and long text efficiently, like using a sports car for quick trips and a truck for heavy loads.
What's the problem?
Transformers struggle with long texts because their cost grows quadratically with length, while Mamba models (efficient on long texts) can lose track of context and generalize less reliably across multiple tasks.
What's the solution?
TransMamba combines both systems into one AI brain that automatically picks the best approach for each part of the text, using a 'memory translator' to keep information flowing smoothly between them.
Why does it matter?
This matters because it makes AI faster and smarter at tasks like summarizing books or analyzing scientific papers, saving energy while keeping accuracy.
Abstract
Transformers are the cornerstone of modern large language models, but their quadratic computational complexity limits efficiency in long-sequence processing. Recent advancements in Mamba, a state space model (SSM) with linear complexity, offer promising efficiency gains but suffer from unstable contextual learning and limited multitask generalization. This paper proposes TransMamba, a novel framework that unifies Transformer and Mamba through shared parameter matrices (e.g., QKV and CBx), and thus can dynamically switch between attention and SSM mechanisms at different token lengths and layers. We design the Memory converter to bridge Transformer and Mamba by converting attention outputs into SSM-compatible states, ensuring seamless information flow at TransPoints where the transformation happens. The TransPoint scheduling is also thoroughly explored for further improvements. We conducted extensive experiments demonstrating that TransMamba achieves superior training efficiency and performance compared to baselines, and validated the deeper consistency between Transformer and Mamba paradigms, offering a scalable solution for next-generation sequence modeling.
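To make the Memory-converter idea concrete, here is a minimal NumPy sketch of one way such a bridge could work: it compresses the attention keys and values seen before a TransPoint into a single fixed-size recurrent state (a decay-weighted sum of key-value outer products, in the style of linear attention), which an SSM could then take as its initial hidden state. The function name, the decay rule, and the outer-product form are illustrative assumptions for exposition, not the paper's exact conversion.

```python
import numpy as np

def memory_converter(K, V, decay=0.95):
    """Illustrative sketch (not the paper's exact method): fold attention
    keys K and values V, shape (T, d) each, into one SSM-style state by a
    decay-weighted accumulation of outer products k_t v_t^T."""
    T, d = K.shape
    state = np.zeros((d, V.shape[1]))
    for t in range(T):
        # Older tokens are down-weighted by `decay`, newer ones dominate,
        # mimicking the forgetting behavior of a recurrent SSM state.
        state = decay * state + np.outer(K[t], V[t])
    return state

# Toy usage: 4 tokens with head dimension 3 produce a 3x3 state
# that a subsequent SSM segment could consume as its initial memory.
rng = np.random.default_rng(0)
K = rng.normal(size=(4, 3))
V = rng.normal(size=(4, 3))
h0 = memory_converter(K, V)
print(h0.shape)  # (3, 3)
```

The appeal of a conversion like this is that the handoff at a TransPoint is O(T·d²) once, after which the SSM segment proceeds with linear cost, rather than re-attending over the full prefix.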