
AdaPTS: Adapting Univariate Foundation Models to Probabilistic Multivariate Time Series Forecasting

Abdelhakim Benechehab, Vasilii Feofanov, Giuseppe Paolo, Albert Thomas, Maurizio Filippone, Balázs Kégl

2025-02-17


Summary

This paper introduces AdaPTS, a new way to take AI models that are good at predicting a single stream of data and make them work with many related streams at once, especially for forecasting future trends.

What's the problem?

Current AI models that are pre-trained to predict future trends (called foundation models) are really good at working with one type of data at a time, like stock prices. But in the real world, we often need to look at many different types of data together to make accurate predictions. These models also struggle to show how sure they are about their predictions, which is important for making decisions.

What's the solution?

The researchers created small add-on components called adapters that let the AI models handle multiple types of data at once. An adapter transforms the combined data into a form the single-stream AI can process one piece at a time, then converts the results back into predictions for all the original data streams. They also made the AI better at showing how confident it is in its predictions. They tested their method on both synthetic (computer-generated) data and real-world data to show that it works well.
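The adapter idea can be sketched in a few lines of code. This is a minimal illustration, not the authors' actual implementation: all names here (`naive_univariate_fm`, the random projection `W`) are hypothetical stand-ins, and AdaPTS learns its transformation rather than using a fixed random one.

```python
# Hypothetical sketch of the adapter idea: project a multivariate series
# into a latent space, forecast each latent channel independently with a
# univariate model, then map the forecasts back to the feature space.
import numpy as np

rng = np.random.default_rng(0)
n_features, latent_dim, horizon = 4, 2, 3
series = rng.normal(size=(24, n_features))  # (time steps, features)

def naive_univariate_fm(history):
    """Stand-in for a pre-trained univariate FM: repeat the last value."""
    return np.full(horizon, history[-1])

# Linear adapter: a random projection here; AdaPTS learns this transform.
W = rng.normal(size=(n_features, latent_dim))
W_pinv = np.linalg.pinv(W)  # used to map forecasts back

latent = series @ W  # project multivariate input into the latent space
latent_forecast = np.stack(
    [naive_univariate_fm(latent[:, d]) for d in range(latent_dim)],
    axis=1,
)
forecast = latent_forecast @ W_pinv  # back to the original feature space
print(forecast.shape)  # (horizon, n_features)
```

The key point is that the expensive pre-trained model only ever sees one-dimensional series; the adapter is the only part that knows about the relationships between features.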

Why it matters?

This matters because it could make AI forecasting tools much more useful in real-world situations where we need to look at many different factors to predict the future. For example, it could help businesses make better decisions about inventory or help scientists make more accurate weather predictions. By making these AI models work with multiple types of data and show how sure they are, AdaPTS could help people trust and use AI forecasting in more important decisions.

Abstract

Pre-trained foundation models (FMs) have shown exceptional performance in univariate time series forecasting tasks. However, several practical challenges persist, including managing intricate dependencies among features and quantifying uncertainty in predictions. This study aims to tackle these critical limitations by introducing adapters: feature-space transformations that facilitate the effective use of pre-trained univariate time series FMs for multivariate tasks. Adapters operate by projecting multivariate inputs into a suitable latent space and applying the FM independently to each dimension. Inspired by the literature on representation learning and partially stochastic Bayesian neural networks, we present a range of adapters and optimization/inference strategies. Experiments conducted on both synthetic and real-world datasets confirm the efficacy of adapters, demonstrating substantial enhancements in forecasting accuracy and uncertainty quantification compared to baseline methods. Our framework, AdaPTS, positions adapters as a modular, scalable, and effective solution for leveraging time series FMs in multivariate contexts, thereby promoting their wider adoption in real-world applications. We release the code at https://github.com/abenechehab/AdaPTS.
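The abstract mentions partially stochastic adapters as the route to uncertainty quantification. One way this can work, sketched below under assumed details (the perturbation scale and the pseudo-inverse decoder are illustrative choices, not the paper's specification), is to sample several adapter transforms, forecast with each, and summarize the spread of the resulting predictions.

```python
# Hedged sketch of uncertainty from a stochastic adapter: sample the
# adapter weights several times, forecast with each sample, and report
# the mean and standard deviation across samples.
import numpy as np

rng = np.random.default_rng(1)
n_features, latent_dim, horizon, n_samples = 4, 2, 3, 50
series = rng.normal(size=(24, n_features))

def univariate_fm(history):
    return np.full(horizon, history[-1])  # placeholder for a real FM

W_mean = rng.normal(size=(n_features, latent_dim))  # learned mean weights
forecasts = []
for _ in range(n_samples):
    # Draw one adapter sample around the mean (illustrative noise scale).
    W = W_mean + 0.1 * rng.normal(size=W_mean.shape)
    latent = series @ W
    latent_fc = np.stack(
        [univariate_fm(latent[:, d]) for d in range(latent_dim)], axis=1
    )
    forecasts.append(latent_fc @ np.linalg.pinv(W))

forecasts = np.array(forecasts)  # (samples, horizon, features)
mean, std = forecasts.mean(axis=0), forecasts.std(axis=0)
print(mean.shape, std.shape)  # per-step, per-feature mean and spread
```

Because the base FM stays frozen, all of the randomness lives in the small adapter, which keeps the sampling loop cheap relative to the foundation model itself.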