ExoActor: Exocentric Video Generation as Generalizable Interactive Humanoid Control
Yanghao Zhou, Jingyu Ma, Yibo Peng, Zhenguo Sun, Yu Bai, Börje F. Karlsson
2026-05-01
Summary
This paper introduces a new way to control humanoid robots by leveraging AI models that excel at generating videos. The goal is to make robots act more naturally when interacting with their surroundings and completing tasks.
What's the problem?
Getting robots to perform complex tasks that require interacting with objects and understanding their surroundings is hard. Traditional methods struggle because they must jointly model many things at once: where objects are (spatial context), how the scene changes over time (temporal dynamics), what the robot is doing (its actions), and what the overall goal is (task intent). Collecting enough supervision to teach a robot all of this, in a way that also lets it adapt to new situations, is very difficult.
What's the solution?
The researchers developed a system called ExoActor. Given a task instruction and a view of the scene, it first has an AI *imagine* how a person would complete the task by generating a third-person video of the execution. It then extracts the movements from that video and translates them into instructions the robot can follow. Essentially, the robot learns by watching a simulated human perform the task, so it can pick up new tasks without large amounts of additional real-world data. A rough sketch of this pipeline is shown below.
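To make the three stages concrete, here is a minimal sketch of the pipeline as the paper describes it. Every name in it (the model objects and their `generate`, `estimate`, and `execute` methods) is a hypothetical placeholder, not the authors' actual API.

```python
# Minimal sketch of the ExoActor pipeline described above.
# All component names and method signatures are illustrative
# assumptions, not the authors' released code.

import numpy as np


def exoactor_pipeline(instruction: str,
                      scene_image: np.ndarray,
                      video_model,
                      motion_estimator,
                      motion_controller):
    # Stage 1: synthesize a third-person (exocentric) video of the
    # task being performed in the given scene, conditioned on the
    # task instruction.
    frames = video_model.generate(prompt=instruction, context=scene_image)

    # Stage 2: recover the performer's motion from the generated
    # frames, e.g. as a sequence of whole-body poses.
    human_motion = motion_estimator.estimate(frames)

    # Stage 3: a general motion controller retargets and tracks the
    # estimated motion on the humanoid, producing a task-conditioned
    # sequence of robot actions.
    return motion_controller.execute(human_motion)
```

Note the division of labor: the video model carries all of the task and scene understanding, while the downstream stages only need to reproduce the motion it depicts.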
Why does it matter?
This research is important because it offers a more flexible and scalable way to program robots. Instead of painstakingly coding every possible interaction, we can leverage the power of AI video generation to let robots learn from simulated experiences. This could lead to robots that are much better at handling real-world tasks and adapting to unexpected situations, bringing us closer to truly intelligent and helpful humanoid robots.
Abstract
Humanoid control systems have made significant progress in recent years, yet modeling fluent, interaction-rich behavior between a robot, its surrounding environment, and task-relevant objects remains a fundamental challenge. This difficulty arises from the need to jointly capture spatial context, temporal dynamics, robot actions, and task intent at scale, which is a poor match for conventional supervision. We propose ExoActor, a novel framework that leverages the generalization capabilities of large-scale video generation models to address this problem. The key insight behind ExoActor is to use third-person video generation as a unified interface for modeling interaction dynamics. Given a task instruction and scene context, ExoActor synthesizes plausible execution processes that implicitly encode coordinated interactions between the robot, the environment, and objects. The generated video is then transformed into executable humanoid behaviors through a pipeline that estimates human motion and executes it via a general motion controller, yielding a task-conditioned behavior sequence. To validate the proposed framework, we implement it as an end-to-end system and demonstrate its generalization to new scenarios without additional real-world data collection. Finally, we discuss limitations of the current implementation and outline promising directions for future research. ExoActor offers a scalable approach to modeling interaction-rich humanoid behaviors, potentially opening a new avenue for generative models to advance general-purpose humanoid intelligence.
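The abstract's video-to-behavior pipeline hinges on the final step: executing estimated human motion through a general motion controller. As a hedged illustration of what that handoff can look like, the sketch below retargets estimated human joint angles to a humanoid and tracks them with joint-space PD control; the linear retargeting map, the gains, and the robot interface are all assumptions for exposition, not ExoActor's actual controller.

```python
# Hedged sketch: tracking an estimated human motion on a humanoid via
# joint-space PD control. The retargeting map, gains, and robot
# interface are illustrative assumptions, not the paper's method.

import numpy as np


def retarget(human_pose: np.ndarray, joint_map: np.ndarray) -> np.ndarray:
    """Map estimated human joint angles to humanoid joint targets.

    `joint_map` is a hypothetical linear retargeting matrix accounting
    for the differing kinematics; real systems typically use more
    elaborate retargeting.
    """
    return joint_map @ human_pose


def pd_torques(q_target, q, qd, kp=80.0, kd=4.0):
    """Joint-space PD law: tau = kp * (q_target - q) - kd * qd."""
    return kp * (q_target - q) - kd * qd


def track_motion(motion, robot, joint_map, dt=0.02):
    """Step through an estimated motion sequence, one pose per frame."""
    for human_pose in motion:
        q_target = retarget(human_pose, joint_map)
        q, qd = robot.joint_state()      # assumed robot interface
        robot.apply_torques(pd_torques(q_target, q, qd))
        robot.step(dt)
```

In this framing, task generalization comes entirely from the generated video; the controller itself is task-agnostic and only needs to track whatever motion the upstream stages produce.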