FAN: Fourier Analysis Networks
Yihong Dong, Ge Li, Yongding Tao, Xue Jiang, Kechi Zhang, Jia Li, Jing Su, Jun Zhang, Jingjing Xu
2024-10-08

Summary
This paper introduces FAN, or Fourier Analysis Networks, a new type of neural network designed to model and predict periodic patterns in data more faithfully than standard architectures, a capability that matters across many applications.
What's the problem?
Traditional neural networks, like multi-layer perceptrons (MLPs) and Transformers, often struggle with periodic data. Instead of truly capturing the repeating structure, they tend to memorize the training range. As a result, they generalize poorly whenever predictions depend on the pattern continuing to repeat, which is a key feature of many real-world systems such as weather patterns or financial time series.
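One way to see this failure mode (an illustrative sketch, not an experiment from the paper): a ReLU MLP is piecewise linear in its input, so far beyond its last "kink" it becomes exactly affine along any input direction, while a periodic target keeps oscillating. The tiny untrained network below makes this visible; the sizes and input range are arbitrary choices, and training would not change the piecewise-linear argument.

```python
import numpy as np

rng = np.random.default_rng(1)

# A one-hidden-layer ReLU network with random weights. Any ReLU MLP,
# trained or not, is a piecewise-linear function of its input.
w = rng.normal(size=64)   # hidden weights
b = rng.normal(size=64)   # hidden biases
v = rng.normal(size=64)   # output weights

def mlp(x):
    # x: (n,) scalar inputs -> (n,) scalar outputs
    return np.maximum(np.outer(x, w) + b, 0.0) @ v

# Each unit's kink sits at x = -b_i / w_i. Far past every kink, the set of
# active units is frozen, so the network is affine in x and its second
# differences vanish; sin(x) keeps turning no matter how far out we go.
xs = np.linspace(1000.0, 1001.0, 50)
flat = np.max(np.abs(np.diff(mlp(xs), 2)))     # ~0: the MLP has gone straight
wavy = np.max(np.abs(np.diff(np.sin(xs), 2)))  # stays bounded away from 0
print(flat, wavy)
```

The MLP can only continue its last linear piece, so any periodicity it shows on the training range is memorized rather than built into the function class.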
What's the solution?
To address this issue, the authors developed FAN, which builds principles from Fourier analysis directly into the neural network's structure. By incorporating a Fourier series into the layers themselves, FAN models periodic phenomena natively rather than approximating them piecewise. This allows it to express and predict periodic patterns more accurately while using fewer parameters and FLOPs than a comparable MLP, which it can replace as a drop-in component. The paper shows that FAN outperforms existing models on a range of tasks, including symbolic formula representation, time series forecasting, and language modeling.
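As a rough sketch of the idea (the paper's exact formulation may differ, and the layer sizes here are illustrative assumptions), a FAN-style layer can concatenate cosine and sine features of one shared linear projection with a conventional nonlinear branch. Sharing the projection between cos and sin is what lets such a layer produce the same output width as an MLP layer with fewer parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def fan_layer(x, W_p, W_g, b_g):
    """Sketch of a FAN-style layer: a Fourier branch (cos and sin of one
    shared linear projection) concatenated with a standard ReLU branch.
    Illustrative only; not the authors' exact parameterization."""
    p = x @ W_p                          # shared projection, (batch, d_p)
    g = np.maximum(x @ W_g + b_g, 0.0)   # conventional branch, (batch, d_g)
    return np.concatenate([np.cos(p), np.sin(p), g], axis=-1)

d_in, d_p, d_g = 16, 8, 16               # hypothetical sizes
W_p = rng.normal(size=(d_in, d_p))
W_g = rng.normal(size=(d_in, d_g))
b_g = np.zeros(d_g)

x = rng.normal(size=(4, d_in))
out = fan_layer(x, W_p, W_g, b_g)
print(out.shape)                         # 2*d_p + d_g = 32 output features

# Reusing W_p for both cos and sin saves parameters relative to a plain
# MLP layer producing the same 2*d_p + d_g outputs:
fan_params = W_p.size + W_g.size + b_g.size            # 128 + 256 + 16
mlp_params = d_in * (2 * d_p + d_g) + (2 * d_p + d_g)  # weight + bias
print(fan_params, mlp_params)
```

Because the cos/sin features are periodic by construction, periodicity lives in the function class itself rather than being something the network must memorize from samples.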
Why it matters?
This research is significant because it enhances the ability of neural networks to work with periodic data, which is crucial in many fields such as finance, science, and engineering. By improving how these models understand and predict patterns, FAN could lead to better decision-making tools and more accurate forecasts in real-world applications.
Abstract
Despite the remarkable success achieved by neural networks, particularly those represented by MLP and Transformer, we reveal that they exhibit potential flaws in the modeling and reasoning of periodicity, i.e., they tend to memorize the periodic data rather than genuinely understanding the underlying principles of periodicity. However, periodicity is a crucial trait in various forms of reasoning and generalization, underpinning predictability across natural and engineered systems through recurring patterns in observations. In this paper, we propose FAN, a novel network architecture based on Fourier Analysis, which empowers the ability to efficiently model and reason about periodic phenomena. By introducing Fourier Series, the periodicity is naturally integrated into the structure and computational processes of the neural network, thus achieving a more accurate expression and prediction of periodic patterns. As a promising substitute to multi-layer perceptron (MLP), FAN can seamlessly replace MLP in various models with fewer parameters and FLOPs. Through extensive experiments, we demonstrate the effectiveness of FAN in modeling and reasoning about periodic functions, and the superiority and generalizability of FAN across a range of real-world tasks, including symbolic formula representation, time series forecasting, and language modeling.