Why Do Transformers Fail to Forecast Time Series In-Context?

Yufa Zhou, Yixiao Wang, Surbhi Goel, Anru R. Zhang

2025-10-15

Summary

This paper investigates why Transformers, the powerful AI models behind modern language systems, surprisingly perform worse than much simpler methods at predicting future values in a sequence of data points over time, a task known as time series forecasting.

What's the problem?

Even though Transformers excel at many tasks, they often lose to much simpler models, such as linear regression, when it comes to predicting what happens next in a time series. Researchers haven't fully understood *why* this happens, and solid theoretical explanations for this counterintuitive result have been scarce. The core issue is that, despite their complexity, Transformers aren't necessarily better equipped to handle the patterns found in time series data.

What's the solution?

The researchers used mathematical theory, specifically the framework of In-Context Learning (ICL), to analyze how Transformers use the past data in their prompt to make predictions. They focused on a simplified architecture called Linear Self-Attention (LSA) and on Chain-of-Thought (CoT) style inference, where the model feeds its own predictions back in as inputs. Assuming the data follows a classical autoregressive AR(p) process, they proved that these Transformers cannot achieve a lower expected error than simple linear models, and that as the context grows longer, they converge to the optimal linear predictor. They also showed that under Chain-of-Thought inference, multi-step predictions collapse toward the mean of the series exponentially fast. They backed up these theoretical findings with carefully designed experiments.
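The collapse-to-the-mean behavior under iterated, CoT-style forecasting can be illustrated with a tiny numerical sketch. This is not the paper's code; it is a minimal assumption-laden demo using plain least squares on a simulated AR(1) process, for which the optimal one-step predictor is itself linear:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a stationary, zero-mean AR(1): x_t = a * x_{t-1} + noise
a = 0.8
n = 500
x = np.zeros(n)
for t in range(1, n):
    x[t] = a * x[t - 1] + rng.normal(scale=0.1)

# Fit the one-step linear predictor by ordinary least squares
a_hat = np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])

# Iterated ("rollout") multi-step forecasting: feed each prediction
# back in as the next input, as CoT-style inference does
horizon = 20
pred = x[-1]
preds = []
for _ in range(horizon):
    pred = a_hat * pred
    preds.append(pred)

# The h-step forecast equals a_hat**h * x[-1], so for |a_hat| < 1 it
# shrinks toward the series mean (zero here) at a geometric rate
print(abs(preds[-1]) < abs(preds[0]))
```

Because the h-step rollout multiplies by the fitted coefficient h times, any coefficient of magnitude below one drives the forecast toward the unconditional mean exponentially in the horizon, mirroring the paper's CoT collapse result.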

Why it matters?

This work is important because it shows that simply using bigger and more complex AI models doesn't automatically mean better results in time series forecasting. It encourages researchers to think more carefully about the fundamental limitations of these models and to develop new forecasting methods that are specifically designed to handle time series data, rather than just applying existing architectures without considering if they're actually a good fit.

Abstract

Time series forecasting (TSF) remains a challenging and largely unsolved problem in machine learning, despite significant recent efforts leveraging Large Language Models (LLMs), which predominantly rely on Transformer architectures. Empirical evidence consistently shows that even powerful Transformers often fail to outperform much simpler models, e.g., linear models, on TSF tasks; however, a rigorous theoretical understanding of this phenomenon remains limited. In this paper, we provide a theoretical analysis of Transformers' limitations for TSF through the lens of In-Context Learning (ICL) theory. Specifically, under AR(p) data, we establish that: (1) Linear Self-Attention (LSA) models cannot achieve lower expected MSE than classical linear models for in-context forecasting; (2) as the context length approaches infinity, LSA asymptotically recovers the optimal linear predictor; and (3) under Chain-of-Thought (CoT) style inference, predictions collapse to the mean exponentially. We empirically validate these findings through carefully designed experiments. Our theory not only sheds light on several previously underexplored phenomena but also offers practical insights for designing more effective forecasting architectures. We hope our work encourages the broader research community to revisit the fundamental theoretical limitations of TSF and to critically evaluate the direct application of increasingly sophisticated architectures without deeper scrutiny.
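For reference, the AR(p) model assumed in the abstract takes the standard textbook form (this is the usual definition, not a formula copied from the paper):

```latex
x_t = \sum_{i=1}^{p} a_i \, x_{t-i} + \varepsilon_t,
\qquad \varepsilon_t \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0, \sigma^2)
```

Under this model the conditional mean of $x_t$ given the past is itself a linear function of the last $p$ observations, which is why a linear predictor is already MSE-optimal and why beating it in-context is impossible for the models the paper analyzes.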