MIND-V: Hierarchical Video Generation for Long-Horizon Robotic Manipulation with RL-based Physical Alignment

Ruicheng Zhang, Mingyang Zhang, Jun Zhou, Zhangrui Guo, Xiaofan Liu, Zunnan Xu, Zhizhou Zhong, Puxin Yan, Haocheng Luo, Xiu Li

2025-12-10

Summary

This paper introduces a new system called MIND-V that creates realistic, long videos of robots doing complex tasks, like manipulating objects. It aims to overcome the difficulty of getting enough real-world data to teach robots how to perform these actions.

What's the problem?

Teaching robots through 'imitation learning' – where they learn by watching – is hard because there isn't enough video of robots performing a wide variety of tasks over long stretches of time. Existing video generation methods can only create short clips of simple actions, and they often need someone to manually define the robot's trajectory beforehand. In short, the available training videos aren't diverse or long enough for robots to learn complex, real-world skills just by watching.

What's the solution?

MIND-V tackles this by breaking the process into three parts. First, a 'Semantic Reasoning Hub' uses a pre-trained vision-language model to understand the overall goal and plan the task. Then, a 'Behavioral Semantic Bridge' translates that plan into instructions that work regardless of the specific robot being used. Finally, a 'Motor Video Generator' renders the actual video of the robot performing the task. To keep long videos coherent, MIND-V uses a test-time technique called 'Staged Visual Future Rollouts', and to make them physically realistic, it adds a GRPO (Group Relative Policy Optimization) reinforcement learning phase whose 'Physical Foresight Coherence' reward checks whether the robot's actions follow the laws of physics, using a world model to predict what *should* happen.
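To make the hierarchy concrete, here is a minimal sketch of how the three stages could compose into one long video. This is illustrative only: the real components are large neural models, and every function name below is hypothetical, not taken from the paper's code.

```python
# Hypothetical stubs showing how MIND-V's three-stage hierarchy composes.
# Real SRH/BSB/MVG are neural models; strings stand in for plans and frames.

def semantic_reasoning_hub(task_goal: str) -> list[str]:
    """High-level planner (a vision-language model in the paper):
    decomposes the overall goal into an ordered list of subtasks."""
    return [f"step {i}: {part.strip()}"
            for i, part in enumerate(task_goal.split(","), start=1)]

def behavioral_semantic_bridge(subtask: str) -> dict:
    """Translate one subtask into a robot-agnostic behavioral representation."""
    return {"behavior": subtask, "embodiment": "invariant"}

def motor_video_generator(behavior: dict, prev_frame: str) -> list[str]:
    """Render a short video clip conditioned on the behavior and the
    last frame of the previous clip."""
    return [f"{prev_frame} -> frame for {behavior['behavior']}"]

def generate_long_horizon_video(task_goal: str) -> list[str]:
    """Chain clips subtask by subtask; conditioning each clip on the last
    frame of the previous one mirrors the staged-rollout idea."""
    frames = ["initial scene"]
    for subtask in semantic_reasoning_hub(task_goal):
        clip = motor_video_generator(behavioral_semantic_bridge(subtask),
                                     frames[-1])
        frames.extend(clip)
    return frames

video = generate_long_horizon_video("grasp cup, move to shelf, release")
```

The key design point the sketch captures is that planning happens once at the semantic level, while video synthesis proceeds clip by clip, each clip grounded in the previous one.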

Why it matters?

This research is important because it provides a way to automatically generate large amounts of realistic robot training data. This means we can teach robots more complex skills without needing to spend a lot of time and money recording real-world examples. It’s a step towards making robots more adaptable and capable of performing useful tasks in the real world, and it offers a scalable way to create data for embodied AI.

Abstract

Embodied imitation learning is constrained by the scarcity of diverse, long-horizon robotic manipulation data. Existing video generation models for this domain are limited to synthesizing short clips of simple actions and often rely on manually defined trajectories. To this end, we introduce MIND-V, a hierarchical framework designed to synthesize physically plausible and logically coherent videos of long-horizon robotic manipulation. Inspired by cognitive science, MIND-V bridges high-level reasoning with pixel-level synthesis through three core components: a Semantic Reasoning Hub (SRH) that leverages a pre-trained vision-language model for task planning; a Behavioral Semantic Bridge (BSB) that translates abstract instructions into domain-invariant representations; and a Motor Video Generator (MVG) for conditional video rendering. MIND-V employs Staged Visual Future Rollouts, a test-time optimization strategy to enhance long-horizon robustness. To align the generated videos with physical laws, we introduce a GRPO reinforcement learning post-training phase guided by a novel Physical Foresight Coherence (PFC) reward. PFC leverages the V-JEPA world model to enforce physical plausibility by aligning the predicted and actual dynamic evolutions in the feature space. MIND-V demonstrates state-of-the-art performance in long-horizon robotic manipulation video generation, establishing a scalable and controllable paradigm for embodied data synthesis.
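The abstract says the PFC reward aligns "predicted and actual dynamic evolutions in the feature space" of the V-JEPA world model, but does not give the reward formula. Below is a minimal NumPy sketch of one plausible reading, under two assumptions I'm making: both videos are encoded to `(T, D)` feature sequences by a frozen encoder, and "dynamic evolution" means frame-to-frame feature differences compared via cosine similarity.

```python
import numpy as np

def pfc_style_reward(predicted_feats: np.ndarray,
                     actual_feats: np.ndarray) -> float:
    """Score how well a generated video's dynamics match a world model's
    forecast. Inputs are (T, D) feature sequences: predicted_feats from a
    frozen world model, actual_feats from encoding the generated frames.
    The cosine-similarity-of-deltas form here is an illustrative
    assumption, not the paper's exact reward."""
    # Compare frame-to-frame feature *changes* (the dynamic evolution),
    # not raw features, so static appearance does not dominate the score.
    dp = np.diff(predicted_feats, axis=0)   # (T-1, D)
    da = np.diff(actual_feats, axis=0)      # (T-1, D)
    num = np.sum(dp * da, axis=1)
    den = (np.linalg.norm(dp, axis=1) *
           np.linalg.norm(da, axis=1) + 1e-8)
    # Mean per-step cosine similarity in [-1, 1]; higher = more coherent.
    return float(np.mean(num / den))
```

In a GRPO-style post-training loop, a scalar reward like this would be computed for each sampled video in a group and used to compute relative advantages for the policy update.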