Flow Equivariant Recurrent Neural Networks
T. Anderson Keller
2025-08-01
Summary
This paper introduces Flow Equivariant Recurrent Neural Networks (FERNNs), sequence models that better track and predict data undergoing smooth, time-parameterized changes, such as objects moving through a video, by building respect for those changes directly into the network.
What's the problem?
The problem is that standard recurrent neural networks (RNNs) handle smoothly moving or flowing data poorly: they are not equivariant to transformations that unfold over time, so when a stimulus moves or changes speed, the network loses track of patterns it has already learned.
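To make this failure concrete, here is a small NumPy sketch (the scalar weights, shapes, and drift speed are illustrative assumptions, not taken from the paper). It compares a simple pointwise recurrence on a static stream of frames against the same frames drifting one position per time step: the resulting hidden state is not any shifted copy of the static one, so the representation does not travel with the moving pattern.

```python
import numpy as np

# Illustrative only: a pointwise "vanilla" recurrence on a 1-D spatial signal.
# Scalar weights a, b stand in for learned parameters.
def vanilla_rnn(frames, a=0.5, b=1.0):
    h = np.zeros(frames.shape[1])
    for x in frames:
        h = np.tanh(a * h + b * x)
    return h

rng = np.random.default_rng(0)
T, N = 4, 6
frames = rng.standard_normal((T, N))

# The same frames, but drifting rightward one position per time step.
flowed = np.stack([np.roll(frames[t], t + 1) for t in range(T)])

h_static = vanilla_rnn(frames)
h_flowed = vanilla_rnn(flowed)

# The hidden state on the moving stream is not any circular shift of the
# static one: the recurrence has lost track of the moving pattern.
mismatch = all(not np.allclose(h_flowed, np.roll(h_static, s))
               for s in range(N))
```

Note that this recurrence is equivariant to a fixed shift applied to every frame; it is only the time-varying (flowing) shift that breaks the correspondence.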
What's the solution?
FERNNs fix this by building in 'flow equivariance': the model processes data as if it were moving along with the transformation in time and space. Concretely, the hidden state is expanded to include a copy for each candidate flow (for example, each velocity), and each copy is transported along its own flow at every time step, allowing the network to track and predict sequences involving motion.
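The lifting idea can be sketched for one-dimensional translation flows (the velocity set, scalar weights, and shapes below are illustrative assumptions, not the paper's implementation). The hidden state carries one copy per candidate velocity, and each copy is shifted by its own velocity before the recurrent update. Running the model on a stream drifting with velocity v then reproduces the static stream's hidden state, up to a shift and a relabeling of the velocity channel:

```python
import numpy as np

# Illustrative sketch: the hidden state is lifted over a set of candidate
# velocities, and each velocity channel is shifted by its own velocity
# ("flowed") before the recurrent update. Scalar weights a, b stand in for
# learned, shift-commuting (e.g. convolutional) weights.
VELOCITIES = [-1, 0, 1]  # hypothetical flow set: left, static, right

def fernn(frames, a=0.5, b=1.0):
    h = np.zeros((len(VELOCITIES), frames.shape[1]))
    for x in frames:
        h = np.stack([np.tanh(a * np.roll(h[i], v) + b * x)
                      for i, v in enumerate(VELOCITIES)])
    return h

rng = np.random.default_rng(0)
T, N = 4, 6
frames = rng.standard_normal((T, N))

# The same frames drifting with velocity v = 1 (one position per step).
v = 1
flowed = np.stack([np.roll(frames[t], v * (t + 1)) for t in range(T)])

h_static = fernn(frames)
h_flowed = fernn(flowed)

# Flow equivariance: the moving stream's hidden state at velocity channel
# nu = 1 equals the static stream's channel nu - v = 0, shifted by v * T.
print(np.allclose(h_flowed[2], np.roll(h_static[1], v * T)))  # True
```

Because the shift operation commutes with the pointwise update, the moving input's hidden state is exactly a transported, velocity-relabeled version of the static one, which is the equivariance property the vanilla recurrence lacks.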
Why does it matter?
This matters because many real-world data streams, such as videos or sensor readings, change smoothly over time. Models that represent these flows natively perform better at tasks like predicting future video frames or analyzing moving objects, which makes them more useful in applications such as robotics, surveillance, and animation.
Abstract
Equivariant neural network architectures are extended to handle time-parameterized transformations, improving performance in sequence models like RNNs for tasks involving moving stimuli.