Self-Supervised Learning of Motion Concepts by Optimizing Counterfactuals

Stefan Stojanov, David Wendt, Seungwoo Kim, Rahul Venkatesh, Kevin Feigelis, Jiajun Wu, Daniel LK Yamins

2025-03-27

Summary

This paper introduces Opt-CWM, a method that teaches computers to understand how things move in videos without needing any human-labeled examples.

What's the problem?

Current methods for estimating motion in videos are usually trained on synthetic (computer-generated) data or depend on hand-tuned rules for specific situations, which limits how well they work on real-world footage.

What's the solution?

The researchers developed Opt-CWM, which takes a pre-trained next-frame prediction model and asks it "what if?" questions: it makes a small, learned change (a counterfactual probe) to one frame, then watches where that change shows up in the model's prediction of the following frame. The place where the probe reappears reveals how that part of the scene moved, and because the probes themselves are learned rather than hand-designed, no fixed heuristics are needed. A sketch of this idea follows below.
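
To make the "what if?" idea concrete, here is a minimal sketch of counterfactual probing for flow. It assumes a frozen next-frame predictor that maps a frame to a predicted next frame (a simplification of the paper's masked-prediction setup) and a small perturbation patch; the function names, signatures, and the peak-difference readout are illustrative, not the paper's actual API.

```python
import torch

def counterfactual_flow(predictor, frame_t, x, y, probe):
    """Estimate where pixel (x, y) of frame_t moves in the next frame.

    predictor: frozen next-frame prediction model, assumed here to map a
        frame of shape (B, C, H, W) to a predicted next frame of the same
        shape. This is a simplification of the paper's setup.
    probe: small perturbation patch of shape (C, ph, pw), e.g. a faint
        Gaussian bump.
    """
    with torch.no_grad():
        # Prediction for the clean input.
        clean_pred = predictor(frame_t)

        # Counterfactual input: add the probe at (x, y) in frame t.
        perturbed = frame_t.clone()
        ph, pw = probe.shape[-2:]
        perturbed[..., y:y + ph, x:x + pw] += probe
        cf_pred = predictor(perturbed)

    # The probe's effect travels with the scene content, so the peak of
    # the prediction difference marks where the probed pixel moved to.
    diff = (cf_pred - clean_pred).abs().sum(dim=1)   # (B, H, W)
    flat_idx = diff.flatten(1).argmax(dim=1)         # (B,)
    y2 = flat_idx // diff.shape[-1]
    x2 = flat_idx % diff.shape[-1]
    return x2 - x, y2 - y   # estimated displacement (dx, dy)
```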

Why does it matter?

This work matters because motion understanding can be learned directly from ordinary, unlabeled video, helping computers better understand the physical world around them; that is important for applications such as robotics, self-driving cars, and controllable video generation.

Abstract

Estimating motion in videos is an essential computer vision problem with many downstream applications, including controllable video generation and robotics. Current solutions are primarily trained using synthetic data or require tuning of situation-specific heuristics, which inherently limits these models' capabilities in real-world contexts. Despite recent developments in large-scale self-supervised learning from videos, leveraging such representations for motion estimation remains relatively underexplored. In this work, we develop Opt-CWM, a self-supervised technique for flow and occlusion estimation from a pre-trained next-frame prediction model. Opt-CWM works by learning to optimize counterfactual probes that extract motion information from a base video model, avoiding the need for fixed heuristics while training on unrestricted video inputs. We achieve state-of-the-art performance for motion estimation on real-world videos while requiring no labeled data.
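
The distinctive step the abstract describes is that the counterfactual probes are optimized rather than fixed, which requires the motion readout to be differentiable so that gradients can reach the probe parameters. Below is a minimal sketch of one such readout, assuming a soft-argmax over the prediction-difference map from the earlier sketch; the soft-argmax choice and the temperature value are stand-ins, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def soft_argmax_flow(diff, x, y, temperature=0.1):
    """Differentiable flow readout from a prediction-difference map.

    diff: (B, H, W) absolute prediction difference between the clean and
        perturbed predictions. (x, y) is the probed source pixel.
    Because soft-argmax is differentiable, a self-supervised loss on the
    resulting flow can be backpropagated into learned probe parameters.
    Illustrative only; the paper's objective is not reproduced here.
    """
    B, H, W = diff.shape
    # Softmax over all spatial positions gives a probability map.
    weights = F.softmax(diff.flatten(1) / temperature, dim=1)  # (B, H*W)
    ys = torch.arange(H, dtype=diff.dtype, device=diff.device)
    xs = torch.arange(W, dtype=diff.dtype, device=diff.device)
    grid_y, grid_x = torch.meshgrid(ys, xs, indexing="ij")
    exp_y = (weights * grid_y.flatten()).sum(dim=1)  # expected row
    exp_x = (weights * grid_x.flatten()).sum(dim=1)  # expected column
    return exp_x - x, exp_y - y   # soft displacement estimate (dx, dy)
```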