Efficient Model Selection for Time Series Forecasting via LLMs
Wang Wei, Tiankai Yang, Hongjie Chen, Ryan A. Rossi, Yue Zhao, Franck Dernoncourt, Hoda Eldardiry
2025-04-04
Summary
This paper is about using AI language models (LLMs) to automatically pick the best forecasting method for a given time series dataset, without the usual rounds of trial-and-error testing.
What's the problem?
Choosing the right forecasting method usually takes a lot of time and computation, because every candidate method has to be trained and evaluated on many datasets before you know which one works best.
What's the solution?
The researchers propose asking an AI language model to select the best method directly. Instead of running costly evaluations, the model draws on its built-in knowledge of forecasting methods and datasets to make a good choice.
Why does it matter?
This work matters because it saves the time and computing resources normally spent testing many forecasting methods, making model selection far more efficient.
Abstract
Model selection is a critical step in time series forecasting, traditionally requiring extensive performance evaluations across various datasets. Meta-learning approaches aim to automate this process, but they typically depend on pre-constructed performance matrices, which are costly to build. In this work, we propose to leverage Large Language Models (LLMs) as a lightweight alternative for model selection. Our method eliminates the need for explicit performance matrices by utilizing the inherent knowledge and reasoning capabilities of LLMs. Through extensive experiments with LLaMA, GPT and Gemini, we demonstrate that our approach outperforms traditional meta-learning techniques and heuristic baselines, while significantly reducing computational overhead. These findings underscore the potential of LLMs in efficient model selection for time series forecasting.
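The idea in the abstract can be sketched in a few lines: describe the dataset to an LLM, list the candidate forecasters, and parse the model it names. The candidate list, prompt wording, and the `ask_llm` stub below are illustrative assumptions, not the paper's exact setup; in practice `ask_llm` would call a real LLM API (e.g. GPT, Gemini, or LLaMA).

```python
# Hypothetical sketch of LLM-based model selection for time series
# forecasting. Candidate names and prompt text are assumptions for
# illustration, not the paper's exact protocol.

CANDIDATES = ["ARIMA", "ETS", "Prophet", "DLinear", "PatchTST"]

def build_prompt(dataset_description: str, candidates=CANDIDATES) -> str:
    """Describe the dataset and ask the LLM to pick one forecaster."""
    menu = "\n".join(f"- {name}" for name in candidates)
    return (
        "You are an expert in time series forecasting.\n"
        f"Dataset: {dataset_description}\n"
        "Choose the single best model from this list and reply with its "
        f"name only:\n{menu}"
    )

def parse_choice(reply: str, candidates=CANDIDATES) -> str:
    """Return the first candidate model mentioned in the LLM's reply."""
    for name in candidates:
        if name.lower() in reply.lower():
            return name
    return candidates[0]  # fall back to a default model

def ask_llm(prompt: str) -> str:
    # Stand-in for a real LLM API call; returns a canned answer here.
    return "PatchTST would suit a long multivariate series with strong seasonality."

prompt = build_prompt("hourly electricity load, 3 years, daily and weekly seasonality")
choice = parse_choice(ask_llm(prompt))
print(choice)  # → PatchTST
```

The key design point, as the abstract argues, is that no performance matrix is built: the only cost is one LLM query per dataset, versus training and scoring every candidate model.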