Polynomial Composition Activations: Unleashing the Dynamics of Large Language Models
Zhijian Zhuo, Ya Wang, Yutao Zeng, Xiaoqing Li, Xun Zhou, Jinwen Ma
2024-11-07

Summary
This paper introduces a new family of activation functions called Polynomial Composition Activations (PolyCom) that improves the performance of large language models (LLMs) by letting them capture higher-order interactions in the data.
What's the problem?
Traditional activation functions like ReLU (Rectified Linear Unit) are commonly used in neural networks, but they can limit the model's ability to capture complex relationships in data. This limitation is especially problematic for transformers, which need to understand intricate dependencies in language and other data types.
What's the solution?
The authors propose PolyCom, a novel activation function that composes polynomials with standard activation functions rather than applying either alone. This composition increases the model's expressiveness, allowing it to learn and represent more complex patterns in data. Their experiments show that replacing conventional activations with PolyCom yields better accuracy and faster convergence in LLMs.
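In rough terms (the notation below is ours and may not match the paper's exact parameterization), a polynomial composition activation combines a degree-r polynomial with a base activation ρ such as ReLU, in either order:

$$\mathrm{PolyCom}(x) = \sum_{i=0}^{r} a_i\, \rho(x)^i \qquad \text{or} \qquad \mathrm{PolyCom}(x) = \rho\!\Big(\sum_{i=0}^{r} a_i\, x^i\Big),$$

where the coefficients $a_i$ are learned jointly with the rest of the network.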
Why it matters?
This research matters because it provides a way to improve how AI models process information, making them more effective at tasks like language understanding and generation. By enhancing the capabilities of LLMs at no architectural cost beyond swapping the activation, PolyCom could benefit a wide range of downstream applications built on these models.
Abstract
Transformers have found extensive applications across various domains due to their powerful fitting capabilities. This success can be partially attributed to their inherent nonlinearity. Thus, in addition to the ReLU function employed in the original transformer architecture, researchers have explored alternative modules such as GeLU and SwishGLU to enhance nonlinearity and thereby augment representational capacity. In this paper, we propose a novel category of polynomial composition activations (PolyCom), designed to optimize the dynamics of transformers. Theoretically, we provide a comprehensive mathematical analysis of PolyCom, highlighting its enhanced expressivity and efficacy relative to other activation functions. Notably, we demonstrate that networks incorporating PolyCom achieve the optimal approximation rate, indicating that PolyCom networks require minimal parameters to approximate general smooth functions in Sobolev spaces. We conduct empirical experiments on the pre-training configurations of large language models (LLMs), including both dense and sparse architectures. By substituting conventional activation functions with PolyCom, we enable LLMs to capture higher-order interactions within the data, thus improving performance metrics in terms of accuracy and convergence rates. Extensive experimental results demonstrate the effectiveness of our method, showing substantial improvements over other activation functions. Code is available at https://github.com/BryceZhuo/PolyCom.
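For intuition, here is a minimal PyTorch-style sketch of a PolyReLU-like variant (learnable coefficients applied to powers of ReLU). The order r = 3, the uniform coefficient initialization, and the layer sizes are illustrative assumptions, not the authors' exact configuration; the official implementation is in the repository linked above.

```python
# Sketch of a polynomial-composition activation in the spirit of PolyCom.
# Assumptions: order r = 3, uniform initialization of coefficients.
import torch
import torch.nn as nn


class PolyReLU(nn.Module):
    """Polynomial composition of ReLU: sum_i a_i * ReLU(x)**i with learnable a_i."""

    def __init__(self, order: int = 3):
        super().__init__()
        self.order = order
        # One learnable coefficient per power, including the constant term.
        self.coeffs = nn.Parameter(torch.ones(order + 1) / (order + 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        r = torch.relu(x)
        out = self.coeffs[0] * torch.ones_like(x)  # constant term a_0
        for i in range(1, self.order + 1):
            out = out + self.coeffs[i] * r.pow(i)  # a_i * ReLU(x)**i
        return out


# Illustrative drop-in usage inside a transformer-style MLP block:
mlp = nn.Sequential(nn.Linear(512, 2048), PolyReLU(order=3), nn.Linear(2048, 512))
y = mlp(torch.randn(4, 512))
```

In the paper's experiments, only the activation inside each MLP block is replaced in this drop-in fashion; the rest of the dense or sparse transformer architecture is left unchanged.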