SkyReels-V4: Multi-modal Video-Audio Generation, Inpainting and Editing Model
Guibin Chen, Dixuan Lin, Jiangping Yang, Youqiang Zhang, Zhengcong Fei, Debang Li, Sheng Chen, Chaofeng Ao, Nuo Pang, Yiming Wang, Yikun Dou, Zheng Chen, Mingyuan Fan, Tuanhui Li, Mingshan Chang, Hao Zhang, Xiaopeng Sun, Jingtao Xu, Yuqiang Xie, Jiahua Wang, Zhiheng Xu, Weiming Xiong
2026-02-26
Summary
This paper introduces SkyReels-V4, a new artificial intelligence model that can create and manipulate videos with accompanying audio, guided by several different types of instructions you give it.
What's the problem?
Creating high-quality, long videos with synchronized audio is really hard for computers. Existing models often struggle to handle multiple types of input, like text, images, or even other videos and audio, and they can be slow and demand a lot of computing power to generate videos at cinematic resolutions like 1080p.
What's the solution?
The researchers built SkyReels-V4 around a design called a 'dual-stream Multimodal Diffusion Transformer.' Basically, it has two parts working together: one creates the video, and the other creates temporally aligned audio. Both parts understand text instructions well and can also use images, videos, or audio examples as guides. To keep things fast and efficient, the model first creates a low-resolution version of the whole video along with a few high-resolution keyframes; separate models then sharpen the resolution of every frame and fill in extra frames so the video flows smoothly.
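The three-stage cascade described above can be sketched as follows. All function names, array shapes, stride, and scale factors here are illustrative stand-ins, not the model's actual API: stage 1 produces a low-resolution full sequence plus sparse high-resolution keyframes, stage 2 super-resolves every frame (a real model would use the keyframes as detail guidance), and stage 3 interpolates new in-between frames to raise the frame rate.

```python
import numpy as np

# Hypothetical cascade sketch; names, shapes, and factors are placeholders.

def diffuse_base(frames=12, h=27, w=48, key_stride=4, scale=4):
    """Stage 1 (stand-in for the diffusion model): jointly produce a
    low-resolution full sequence and sparse high-resolution keyframes."""
    low_res = np.random.rand(frames, h, w, 3).astype(np.float32)
    keyframes = np.random.rand(frames // key_stride,
                               h * scale, w * scale, 3).astype(np.float32)
    return low_res, keyframes

def super_resolve(low_res, keyframes, scale=4):
    """Stage 2 (stand-in for the super-resolution model): nearest-neighbor
    upsample; a real SR network would inject detail from the keyframes."""
    return low_res.repeat(scale, axis=1).repeat(scale, axis=2)

def interpolate(frames):
    """Stage 3 (stand-in for frame interpolation): insert the average of
    each neighboring pair, roughly doubling the frame rate."""
    mids = 0.5 * (frames[:-1] + frames[1:])
    out = np.empty((2 * frames.shape[0] - 1,) + frames.shape[1:],
                   dtype=frames.dtype)
    out[0::2] = frames   # original frames at even indices
    out[1::2] = mids     # interpolated frames in between
    return out

low, keys = diffuse_base()
hires = super_resolve(low, keys)
final = interpolate(hires)
print(final.shape)  # (23, 108, 192, 3)
```

The tiny spatial sizes keep the sketch cheap to run; scaling the same structure to a 1080p, 32 FPS, 15-second clip is exactly what makes the low-resolution-first strategy necessary.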
Why it matters?
SkyReels-V4 is a big step forward because it is the first model to do so many things at once: understand different types of input, generate video and audio together, and handle creating new videos, fixing existing ones, or editing them, all while producing high-quality results at a length and resolution suitable for professional filmmaking.
Abstract
SkyReels-V4 is a unified multi-modal video foundation model for joint video-audio generation, inpainting, and editing. The model adopts a dual-stream Multimodal Diffusion Transformer (MMDiT) architecture, where one branch synthesizes video and the other generates temporally aligned audio, while sharing a powerful text encoder based on a Multimodal Large Language Model (MMLM). SkyReels-V4 accepts rich multi-modal instructions, including text, images, video clips, masks, and audio references. By combining the MMLM's multi-modal instruction-following capability with in-context learning in the video-branch MMDiT, the model can inject fine-grained visual guidance under complex conditioning, while the audio-branch MMDiT simultaneously leverages audio references to guide sound generation. On the video side, we adopt a channel-concatenation formulation that unifies a wide range of inpainting-style tasks, such as image-to-video, video extension, and video editing, under a single interface, and naturally extends to vision-referenced inpainting and editing via multi-modal prompts. SkyReels-V4 supports up to 1080p resolution, 32 FPS, and 15-second duration, enabling high-fidelity, multi-shot, cinema-level video generation with synchronized audio. To make such high-resolution, long-duration generation computationally feasible, we introduce an efficiency strategy: joint generation of low-resolution full sequences and high-resolution keyframes, followed by dedicated super-resolution and frame-interpolation models. To our knowledge, SkyReels-V4 is the first video foundation model that simultaneously supports multi-modal input, joint video-audio generation, and a unified treatment of generation, inpainting, and editing, while maintaining strong efficiency and quality at cinematic resolutions and durations.
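As a concrete illustration of the channel-concatenation formulation, the sketch below stacks a noisy video latent, a binary mask, and the masked reference video along the channel axis, so that image-to-video, video extension, and editing differ only in how the mask is set. The shapes and the conditioning layout are hypothetical, chosen only to show the idea; they are not the released model's interface.

```python
import numpy as np

# Hypothetical latent shapes: frames, height, width, channels.
T, H, W, C = 8, 32, 32, 4

noisy_latent = np.random.randn(T, H, W, C).astype(np.float32)
reference    = np.random.randn(T, H, W, C).astype(np.float32)

# The task is encoded entirely by the mask: image-to-video keeps only
# frame 0, video extension keeps a prefix of frames, editing keeps
# everything outside an edited spatial region.
mask = np.zeros((T, H, W, 1), dtype=np.float32)
mask[0] = 1.0                  # e.g. image-to-video: first frame is given

masked_ref = reference * mask  # zero out the unknown regions

# Single unified interface: concatenate along channels -> C + 1 + C inputs,
# which one denoiser consumes regardless of the underlying task.
model_input = np.concatenate([noisy_latent, mask, masked_ref], axis=-1)
print(model_input.shape)  # (8, 32, 32, 9)
```

Because every inpainting-style task reduces to choosing a mask, the same denoiser weights and input interface serve generation, extension, and editing without task-specific heads.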