DA-Flow: Degradation-Aware Optical Flow Estimation with Diffusion Models

Jaewon Min, Jaeeun Lee, Yeji Choi, Paul Hyunbin Cho, Jin Hyeon Kim, Tae-Young Lee, Jongsik Ahn, Hwayeong Lee, Seonghyun Park, Seungryong Kim

2026-03-25

Summary

This paper introduces a new approach to optical flow estimation, the task of tracking per-pixel motion between video frames, that stays reliable even when real-world videos suffer from quality issues like blur, noise, and compression.

What's the problem?

Optical flow algorithms usually work really well when trained and tested on clean, high-quality videos. However, real-world videos are often blurry, noisy, or heavily compressed, and these corruptions throw off the algorithms and make their motion estimates inaccurate. Existing methods struggle to maintain accuracy when the input isn't pristine.

What's the solution?

The researchers noticed that image restoration models built on 'diffusion' techniques are naturally good at handling imperfections in images, but they process each image on its own and know nothing about motion. So they extended the restoration model to attend across multiple adjacent frames at once, making its features motion-aware, and fused those corruption-aware features with the features of a traditional optical flow network. The resulting system, called DA-Flow, iteratively refines its motion estimates, combining the strengths of both approaches.
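To make the "look at multiple frames at once" idea concrete, here is a minimal sketch of full spatio-temporal attention: features from all frames in a short clip are flattened into one token set, so every position can attend to every location in the neighbouring frames. This is an illustrative toy (identity projections, no learned weights, hypothetical function name), not the paper's actual architecture.

```python
import numpy as np

def spatio_temporal_attention(frames, d_k=None):
    """Toy full spatio-temporal self-attention over a short clip.

    frames: array of shape (T, H, W, C), features for T adjacent frames.
    Tokens from all frames attend to one another, which is what lets a
    per-image model become aware of motion across frames.
    """
    T, H, W, C = frames.shape
    tokens = frames.reshape(T * H * W, C)          # flatten space AND time
    d_k = d_k or C
    # Identity Q/K/V projections for simplicity; a real model learns these.
    q, k, v = tokens, tokens, tokens
    scores = q @ k.T / np.sqrt(d_k)                # (THW, THW) affinities
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)       # softmax over all tokens
    out = attn @ v                                 # aggregate across frames
    return out.reshape(T, H, W, C)
```

The key design point is that the softmax runs over all T*H*W tokens rather than over a single frame, so the attention map itself encodes cross-frame correspondence.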

Why it matters?

This work is important because it makes optical flow more practical for real-world applications like self-driving cars, video editing, and robotics. By being robust to common video corruptions, the new method can provide more reliable motion information, leading to better performance in these areas.

Abstract

Optical flow models trained on high-quality data often degrade severely when confronted with real-world corruptions such as blur, noise, and compression artifacts. To overcome this limitation, we formulate Degradation-Aware Optical Flow, a new task targeting accurate dense correspondence estimation from real-world corrupted videos. Our key insight is that the intermediate representations of image restoration diffusion models are inherently corruption-aware but lack temporal awareness. To address this limitation, we lift the model to attend across adjacent frames via full spatio-temporal attention, and empirically demonstrate that the resulting features exhibit zero-shot correspondence capabilities. Based on this finding, we present DA-Flow, a hybrid architecture that fuses these diffusion features with convolutional features within an iterative refinement framework. DA-Flow substantially outperforms existing optical flow methods under severe degradation across multiple benchmarks.
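The abstract's claim that the lifted features "exhibit zero-shot correspondence capabilities" can be probed with a simple nearest-neighbour match: take feature maps from two adjacent frames and, for each position in the first, find the most similar position in the second by cosine similarity. This is a generic illustrative probe with a hypothetical function name, not the paper's evaluation protocol.

```python
import numpy as np

def zero_shot_correspondence(feat_a, feat_b):
    """Match each position in frame A to its best match in frame B.

    feat_a, feat_b: (H, W, C) feature maps from two adjacent frames.
    Returns an (H, W, 2) array giving, for each (y, x) in A, the (y, x)
    of the highest-cosine-similarity position in B.
    """
    H, W, C = feat_a.shape
    a = feat_a.reshape(-1, C)
    b = feat_b.reshape(-1, C)
    # L2-normalise so the dot product is cosine similarity.
    a = a / (np.linalg.norm(a, axis=1, keepdims=True) + 1e-8)
    b = b / (np.linalg.norm(b, axis=1, keepdims=True) + 1e-8)
    sim = a @ b.T                      # (HW, HW) similarity matrix
    idx = sim.argmax(axis=1)           # best match in B for each A token
    ys, xs = np.divmod(idx, W)
    return np.stack([ys, xs], axis=1).reshape(H, W, 2)
```

If features truly carry correspondence information, the argmax matches should track the motion between the two frames without any flow-specific training, which is what makes them useful inside an iterative refinement loop.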