Force Prompting: Video Generation Models Can Learn and Generalize Physics-based Control Signals
Nate Gillman, Charles Herrmann, Michael Freeman, Daksh Aggarwal, Evan Luo, Deqing Sun, Chen Sun
2025-05-27
Summary
This paper introduces a technique called force prompting, which lets video generation models produce videos where objects move and interact in physically realistic ways, matching how things would actually move if you pushed or pulled them in real life.
What's the problem?
Most video generation models can produce visually impressive videos, but they don't truly understand or follow the laws of physics, so the motions and interactions they generate can look fake or unnatural.
What's the solution?
The authors condition video generation models on force prompts: signals that describe a physical force, such as the direction and strength of a push. They train these models on synthetic videos rendered in Blender, a 3D animation tool, where the exact forces applied are known. This teaches the models to make objects move in ways consistent with real-world physics, and the learned control generalizes beyond the training scenes.
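As a rough illustration of the idea (not the paper's actual implementation), a force prompt can be pictured as a simple conditioning signal: a point of application plus a force vector, rasterized into a per-frame spatial map that a video model could consume alongside its usual text or image conditioning. All function names, shapes, and encodings below are hypothetical.

```python
import numpy as np

def make_force_map(height, width, point, force, sigma=5.0):
    """Rasterize one applied force into a 2-channel spatial map.

    point : (x, y) pixel where the force is applied (hypothetical encoding)
    force : (fx, fy) force vector, in arbitrary units
    Returns an array of shape (2, height, width): each channel holds one
    force component, weighted by a Gaussian bump around the contact point.
    """
    ys, xs = np.mgrid[0:height, 0:width]
    x0, y0 = point
    bump = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2 * sigma ** 2))
    fx, fy = force
    return np.stack([fx * bump, fy * bump])

# One conditioning map per video frame; a generation model could
# concatenate these channels with its latent inputs during training.
frames = np.stack([
    make_force_map(64, 64, point=(32, 32), force=(1.0, 0.0))
    for _ in range(16)
])
print(frames.shape)  # (16, 2, 64, 64)
```

Because the training videos come from Blender, the ground-truth `point` and `force` values are known exactly for every clip, which is what makes this kind of supervised force conditioning possible.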
Why does it matter?
This matters because it lets AI create far more believable and useful videos, with applications in virtual reality, video games, scientific simulation, and movie special effects. It brings AI-generated video a step closer to the physics of the real world.
Abstract
Force prompting conditions pretrained video generation models on physical force signals; trained on Blender-rendered videos with known ground-truth forces, the models learn to simulate realistic physical interactions and generalize this control to new scenes.