FlashVSR: Towards Real-Time Diffusion-Based Streaming Video Super-Resolution
Junhao Zhuang, Shi Guo, Xin Cai, Xiaohui Li, Yihao Liu, Chun Yuan, Tianfan Xue
2025-10-15
Summary
This paper introduces FlashVSR, a diffusion-based method for video super-resolution, that is, increasing the resolution of videos. It focuses on making this process much faster and more efficient than existing diffusion-based techniques.
What's the problem?
Diffusion models are good at restoring videos, but they are usually too slow and computationally expensive to be practical, especially when producing very high-resolution output from low-resolution sources. They also generalize poorly to videos that differ from their training data, for example ultra-high resolutions never seen during training.
What's the solution?
The researchers developed FlashVSR, which combines three key ideas. First, a three-stage distillation pipeline trains the model to produce results in a single step rather than many, enabling fast streaming processing. Second, locality-constrained sparse attention restricts each part of the video to attend only to nearby regions, cutting redundant computation and helping the model handle resolutions larger than those seen during training. Third, a tiny conditional decoder reconstructs the final high-resolution frames much faster than a full decoder, without sacrificing quality. The researchers also built VSR-120K, a large new dataset of videos and images, to train the model effectively.
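To make the locality idea concrete, the sketch below builds an attention mask in which each token of a flattened spatial grid may attend only to tokens within a small neighborhood. This is a hypothetical illustration of a locality constraint (here a Chebyshev-distance window), not the paper's actual sparse-attention pattern; the function name and window parameter are made up for this example.

```python
import numpy as np

def local_attention_mask(h: int, w: int, window: int) -> np.ndarray:
    """Boolean (h*w, h*w) mask: True where attention is allowed.

    Illustrative locality constraint: token (y, x) may attend to (y', x')
    only if both |y - y'| <= window and |x - x'| <= window.
    """
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    ys, xs = ys.ravel(), xs.ravel()             # (h*w,) token coordinates
    dy = np.abs(ys[:, None] - ys[None, :])      # pairwise row distances
    dx = np.abs(xs[:, None] - xs[None, :])      # pairwise column distances
    return np.maximum(dy, dx) <= window         # Chebyshev-ball neighborhood

# On an 8x8 grid with window=2, far fewer query-key pairs are allowed
# than under full attention, which is the source of the compute savings.
mask = local_attention_mask(8, 8, window=2)
print(int(mask.sum()), mask.shape[0] * mask.shape[1])  # sparse vs. dense pair counts
```

Because the mask depends only on relative position, the same constraint applies at any resolution, which is one plausible reason such a design helps bridge the train-test resolution gap the summary mentions.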
Why it matters?
This work is important because it makes high-quality video enhancement far more practical. FlashVSR is significantly faster than previous methods, reaching roughly 17 frames per second at 768x1408 resolution on a single A100 GPU, and it scales reliably to ultra-high resolutions. This opens the door to wider use of diffusion models in video editing, restoration, and other applications where both speed and quality are crucial.
Abstract
Diffusion models have recently advanced video restoration, but applying them to real-world video super-resolution (VSR) remains challenging due to high latency, prohibitive computation, and poor generalization to ultra-high resolutions. Our goal in this work is to make diffusion-based VSR practical by achieving efficiency, scalability, and real-time performance. To this end, we propose FlashVSR, the first diffusion-based one-step streaming framework towards real-time VSR. FlashVSR runs at approximately 17 FPS for 768x1408 videos on a single A100 GPU by combining three complementary innovations: (i) a train-friendly three-stage distillation pipeline that enables streaming super-resolution, (ii) locality-constrained sparse attention that cuts redundant computation while bridging the train-test resolution gap, and (iii) a tiny conditional decoder that accelerates reconstruction without sacrificing quality. To support large-scale training, we also construct VSR-120K, a new dataset with 120k videos and 180k images. Extensive experiments show that FlashVSR scales reliably to ultra-high resolutions and achieves state-of-the-art performance with up to 12x speedup over prior one-step diffusion VSR models. We will release the code, pretrained models, and dataset to foster future research in efficient diffusion-based VSR.