FutureOmni: Evaluating Future Forecasting from Omni-Modal Context for Multimodal LLMs
Qian Chen, Jinlan Fu, Changsong Li, See-Kiong Ng, Xipeng Qiu
2026-01-21
Summary
This paper introduces a new way to test how well AI models can predict what will happen next in videos with both pictures and sound, going beyond just understanding what *has* happened.
What's the problem?
Current AI models, even the really advanced ones that handle images, video, and sound together, aren't very good at predicting future events based on what they see and hear. Existing tests mostly check if the AI understands things that already occurred, not what's likely to happen next. This means we don't really know how well these models can reason about cause and effect over time using multiple types of information.
What's the solution?
The researchers created a new test called FutureOmni, which contains 919 videos and 1,034 multiple-choice questions designed to challenge AI models to predict future events. They also built a 7,000-example training dataset and a training strategy called Omni-Modal Future Forecasting (OFF) to improve the models' ability to make these predictions. They then tested existing AI models and showed that their new training strategy significantly improves performance on this task.
Why does it matter?
Being able to predict the future from what we see and hear is a crucial skill for AI, especially for things like self-driving cars or robots interacting with the world. This research highlights a weakness in current AI systems and provides a path towards building more intelligent and proactive AI that can anticipate and respond to events in real-time.
Abstract
Although Multimodal Large Language Models (MLLMs) demonstrate strong omni-modal perception, their ability to forecast future events from audio-visual cues remains largely unexplored, as existing benchmarks focus mainly on retrospective understanding. To bridge this gap, we introduce FutureOmni, the first benchmark designed to evaluate omni-modal future forecasting from audio-visual environments. The evaluated models are required to perform cross-modal causal and temporal reasoning, as well as to effectively leverage internal knowledge to predict future events. FutureOmni is constructed via a scalable LLM-assisted, human-in-the-loop pipeline and contains 919 videos and 1,034 multiple-choice QA pairs across 8 primary domains. Evaluations of 13 omni-modal and 7 video-only models show that current systems struggle with audio-visual future prediction, particularly in speech-heavy scenarios, with the best accuracy of 64.8% achieved by Gemini 3 Flash. To mitigate this limitation, we curate a 7K-sample instruction-tuning dataset and propose an Omni-Modal Future Forecasting (OFF) training strategy. Evaluations on FutureOmni and popular audio-visual and video-only benchmarks demonstrate that OFF enhances future forecasting and generalization. We publicly release all code (https://github.com/OpenMOSS/FutureOmni) and datasets (https://huggingface.co/datasets/OpenMOSS-Team/FutureOmni).
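Since the benchmark is released as multiple-choice QA on Hugging Face, a scoring loop over the dataset is straightforward. The sketch below is a minimal illustration, not the authors' evaluation code: it assumes the dataset exposes a `test` split with an `answer` field holding the correct option letter, and `predict_option` is a hypothetical stand-in for whichever omni-modal model is being evaluated; none of these names are specified in the abstract.

```python
# Minimal sketch of scoring a model on FutureOmni's multiple-choice QA.
# Assumptions: a "test" split and an "answer" field with the ground-truth option letter.
from datasets import load_dataset

dataset = load_dataset("OpenMOSS-Team/FutureOmni", split="test")  # split name assumed

def predict_option(example):
    # Placeholder for an omni-modal MLLM call: given the clip, question, and candidate
    # options, it should return a single option letter such as "A". Here it is a trivial
    # always-"A" baseline so the scoring loop runs end to end.
    return "A"

correct = sum(predict_option(ex) == ex["answer"] for ex in dataset)  # field name assumed
accuracy = correct / len(dataset)
print(f"FutureOmni accuracy: {accuracy:.1%}")  # the abstract reports 64.8% for the best model
```

Swapping the placeholder for a real model call reproduces the accuracy metric the abstract reports for the 13 omni-modal and 7 video-only systems.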