
JavisGPT: A Unified Multi-modal LLM for Sounding-Video Comprehension and Generation

Kai Liu, Jungang Li, Yuchong Sun, Shengqiong Wu, Jianzhang Gao, Daoan Zhang, Wei Zhang, Sheng Jin, Sicheng Yu, Geng Zhan, Jiayi Ji, Fan Zhou, Liang Zheng, Shuicheng Yan, Hao Fei, Tat-Seng Chua

2026-01-01

Summary

This paper introduces JavisGPT, a new artificial intelligence model that can both understand and create content that combines audio and video. Rather than handling sounds and visuals separately, it is designed to process the two together as a single, synchronized whole.

What's the problem?

Existing AI models often struggle to truly understand how audio and video connect and change together over time. They might recognize objects in a video or words in speech, but not how they relate to each other or how events unfold. This limits their ability to create realistic or meaningful audio-visual content or answer questions about it accurately.

What's the solution?

The researchers built JavisGPT with a structure that combines audio and video information effectively. It uses a 'SyncFusion' module to keep the audio and video synchronized, and a set of 'learnable queries' to connect the audio-video understanding side of the model to a generator that creates new content. They also built a large dataset, JavisInst-Omni, of over 200,000 audio-video-text conversations curated with the help of another AI (GPT-4o), to train JavisGPT to understand and respond to complex instructions. Training happened in three stages, moving from general multimodal pretraining, to audio-video fine-tuning, to large-scale instruction-tuning.
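To make the data flow concrete, here is a toy sketch of the idea described above: two time-aligned feature streams are fused frame-by-frame (a stand-in for SyncFusion), and a small set of query vectors attends over the fused sequence to produce a compact conditioning signal for a generator. All names, shapes, and the simple attention used here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical dimensions (not from the paper):
T, D, Q = 8, 16, 4  # time steps, per-stream feature dim, number of queries

rng = np.random.default_rng(0)
video_feats = rng.normal(size=(T, D))  # per-frame visual features
audio_feats = rng.normal(size=(T, D))  # per-frame audio features

def sync_fusion(v, a):
    """Toy stand-in for a SyncFusion-style module: the streams are already
    time-aligned, so each frame's visual and audio features are joined
    into one vector, giving a single fused sequence of length T."""
    return np.concatenate([v, a], axis=-1)  # shape (T, 2*D)

fused = sync_fusion(video_feats, audio_feats)

# "Learnable" queries (here just fixed random vectors) attend over the
# fused sequence; the attention-weighted summary is what would be handed
# to a downstream generator as its conditioning input.
queries = rng.normal(size=(Q, 2 * D))
scores = queries @ fused.T                       # (Q, T) similarity scores
attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
attn /= attn.sum(axis=-1, keepdims=True)         # softmax over time
generator_cond = attn @ fused                    # (Q, 2*D) summary vectors

print(generator_cond.shape)  # (4, 32)
```

In a real system the fusion and queries would be trained end-to-end; this sketch only shows how a fixed-size set of query outputs can bridge a variable-length audio-video sequence to a generator.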

Why it matters?

JavisGPT represents a significant step forward in AI's ability to process and generate multimedia content. Because it handles audio and video together so well, it can perform tasks that previous models couldn't, especially those requiring understanding of timing and relationships between sounds and visuals. This could lead to improvements in areas like video editing, automated content creation, and more natural human-computer interaction.

Abstract

This paper presents JavisGPT, the first unified multimodal large language model (MLLM) for Joint Audio-Video (JAV) comprehension and generation. JavisGPT adopts a concise encoder-LLM-decoder architecture, featuring a SyncFusion module for spatio-temporal audio-video fusion and synchrony-aware learnable queries to bridge a pretrained JAV-DiT generator. This design enables temporally coherent video-audio understanding and generation from multimodal instructions. We design an effective three-stage training pipeline consisting of multimodal pretraining, audio-video fine-tuning, and large-scale instruction-tuning, to progressively build multimodal comprehension and generation from existing vision-language models. To support this, we further construct JavisInst-Omni, a high-quality instruction dataset with over 200K GPT-4o-curated audio-video-text dialogues that span diverse and multi-level comprehension and generation scenarios. Extensive experiments on JAV comprehension and generation benchmarks show that JavisGPT outperforms existing MLLMs, particularly in complex and temporally synchronized settings.