Forecasting Time Series with LLMs via Patch-Based Prompting and Decomposition
Mayank Bumb, Anshul Vemulapalli, Sri Harsha Vardhan Prasad Jella, Anish Gupta, An La, Ryan A. Rossi, Hongjie Chen, Franck Dernoncourt, Nesreen K. Ahmed, Yu Wang
2025-06-17
Summary
This paper introduces PatchInstruct, a prompting method that helps large language models (LLMs) produce better forecasts on time series data, such as predicting weather or traffic. The method breaks a time series into small, meaningful patches, instructs the model with clear natural-language prompts, and includes information from similar series to sharpen predictions. This lets LLMs forecast accurately without retraining or complicated model changes, while also running much faster.
What's the problem?
Using large language models for time series forecasting can be slow and less accurate, especially when the model must process long inputs or capture the relationships between different but related series. Existing methods often require heavy retraining or complex architectures, which consume time and computing resources and make these models hard to apply efficiently in real situations.
What's the solution?
PatchInstruct prepares the input as patches that capture important patterns over time, uses clear, structured prompts that tell the LLM exactly what to do, and, when helpful, adds similar time series to inform the forecast. This patch-based tokenization and decomposition simplifies the task and shortens the context the LLM must reason over. Because the approach changes neither the model nor its weights, it stays fast and flexible while still improving accuracy.
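To make the idea concrete, here is a minimal sketch of patch-based prompting: split the series into fixed-length patches and render them as a structured natural-language prompt. The function names, patch length, and prompt wording are illustrative assumptions, not the paper's exact implementation.

```python
# Hypothetical sketch of patch-based prompt construction.
# make_patches / build_prompt and the prompt wording are assumptions,
# not the authors' actual code.
from typing import List


def make_patches(series: List[float], patch_len: int) -> List[List[float]]:
    """Split a time series into consecutive fixed-length patches
    (the last patch may be shorter)."""
    return [series[i:i + patch_len] for i in range(0, len(series), patch_len)]


def build_prompt(series: List[float], patch_len: int, horizon: int) -> str:
    """Render the patches as numbered lines and add a clear instruction."""
    patches = make_patches(series, patch_len)
    patch_lines = "\n".join(
        f"Patch {i + 1}: " + ", ".join(f"{v:.2f}" for v in patch)
        for i, patch in enumerate(patches)
    )
    return (
        f"The time series below is split into patches of {patch_len} "
        "consecutive observations.\n"
        f"{patch_lines}\n"
        f"Predict the next {horizon} values, comma-separated."
    )


prompt = build_prompt([21.0, 21.4, 22.1, 22.8, 23.0, 23.5],
                      patch_len=3, horizon=2)
print(prompt)
```

The point of the patch structure is that the model sees a few coherent segments instead of a long undifferentiated stream of numbers, which keeps the prompt short and the task explicit.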
Why it matters?
Faster and more accurate time series forecasting helps with many real-world problems, such as predicting the weather or managing traffic. By enabling large language models to work efficiently without extra training, PatchInstruct makes advanced AI forecasting accessible and practical for applications where timely, reliable predictions matter.
Abstract
PatchInstruct enhances LLM forecasting quality through specialized prompting methods that include time series decomposition, patch-based tokenization, and similarity-based neighbor augmentation.
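The neighbor-augmentation component can be sketched as a simple nearest-neighbor retrieval: find the historical series most similar to the target and include them in the prompt as context. The Euclidean distance metric and the choice of k here are assumptions for illustration; the paper may use a different similarity measure.

```python
# Hedged sketch of similarity-based neighbor augmentation.
# The distance metric (Euclidean) and k are illustrative assumptions.
import math
from typing import List


def euclidean(a: List[float], b: List[float]) -> float:
    """Pointwise Euclidean distance between two equal-length series."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def nearest_neighbors(target: List[float],
                      candidates: List[List[float]],
                      k: int = 2) -> List[List[float]]:
    """Return the k candidate series closest to the target,
    to be appended to the forecasting prompt as extra context."""
    return sorted(candidates, key=lambda s: euclidean(target, s))[:k]


target = [1.0, 2.0, 3.0]
pool = [[1.1, 2.1, 2.9], [5.0, 5.0, 5.0], [0.9, 1.8, 3.2]]
print(nearest_neighbors(target, pool, k=2))
```

Retrieved neighbors give the LLM examples of how similar series evolved, which is the intuition behind conditioning the forecast on related data without any retraining.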