EvTexture: Event-driven Texture Enhancement for Video Super-Resolution
Dachun Kai, Jiayao Lu, Yueyi Zhang, Xiaoyan Sun
2024-06-24

Summary
This paper presents EvTexture, a new method for improving video quality by enhancing textures in low-resolution videos using event-based vision technology.
What's the problem?
Video super-resolution (VSR) is a process that aims to increase the resolution of videos, making them clearer and more detailed. However, traditional methods often struggle to preserve texture quality and detail, especially in fast-moving scenes or regions with complex textures. This can result in blurry or unrealistic frames, which undermines high-quality video production.
What's the solution?
The researchers developed EvTexture, the first VSR method that uses event signals—asynchronous data captured at very high temporal resolution that records brightness changes in the scene—to enhance the texture of videos. Their approach includes a new branch dedicated to texture enhancement and an iterative module that gradually improves texture details over multiple steps. Instead of trying to predict all the detail at once, the method refines the textures bit by bit, leading to higher-quality results. Their experiments showed that EvTexture outperforms previous methods on four different datasets, with gains of up to 4.67 dB over recent event-based methods on the texture-rich Vid4 dataset.
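The core idea of the iterative module—accumulating texture detail over several refinement steps rather than predicting it in one pass—can be sketched in a toy form. This is a minimal illustration only: the function name, the simple residual blending rule, and the scalar step size are assumptions for exposition, not the paper's learned architecture.

```python
def iterative_texture_refinement(upsampled, event_cue, num_iters=3, step=0.5):
    """Toy sketch of progressive texture refinement (illustrative only).

    Each iteration blends a residual derived from an event-based
    high-frequency cue into the current estimate, so detail accumulates
    gradually instead of being predicted in a single shot. In EvTexture
    this role is played by a learned iterative texture enhancement
    module; the blending rule here is a stand-in.
    """
    estimate = list(upsampled)
    for _ in range(num_iters):
        # Residual correction: how far the current estimate still is
        # from the event-derived high-frequency target.
        estimate = [e + step * (c - e) for e, c in zip(estimate, event_cue)]
    return estimate

# Detail is recovered progressively: 0.0 -> 0.5 -> 0.75 -> 0.875
print(iterative_texture_refinement([0.0, 0.0], [1.0, 1.0], num_iters=3))
```

Each pass closes part of the remaining gap to the target, which mirrors the paper's claim that gradual refinement across iterations yields more accurate high-resolution details than a single-step prediction.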
Why it matters?
This research is important because it introduces a novel way to enhance video quality by focusing on textures, which are crucial for making videos look realistic and appealing. By using event-based vision technology, EvTexture can produce clearer and more detailed videos, which can benefit industries like film, gaming, and virtual reality where high-quality visuals are essential.
Abstract
Event-based vision has drawn increasing attention due to its unique characteristics, such as high temporal resolution and high dynamic range. It has been used in video super-resolution (VSR) recently to enhance the flow estimation and temporal alignment. Rather than for motion learning, we propose in this paper the first VSR method that utilizes event signals for texture enhancement. Our method, called EvTexture, leverages high-frequency details of events to better recover texture regions in VSR. In our EvTexture, a new texture enhancement branch is presented. We further introduce an iterative texture enhancement module to progressively explore the high-temporal-resolution event information for texture restoration. This allows for gradual refinement of texture regions across multiple iterations, leading to more accurate and rich high-resolution details. Experimental results show that our EvTexture achieves state-of-the-art performance on four datasets. For the Vid4 dataset with rich textures, our method can get up to 4.67dB gain compared with recent event-based methods. Code: https://github.com/DachunKai/EvTexture.