
World Guidance: World Modeling in Condition Space for Action Generation

Yue Su, Sijin Chen, Haixin Shi, Mingyu Liu, Zhengshen Zhang, Ningyuan Huang, Weiheng Zhong, Zhengbang Zhu, Yuxiao Liu, Xihui Liu

2026-02-26

Summary

This paper introduces a new method, called WoG (World Guidance), that helps AI systems use predictions about what they will see next to generate actions, like a robot completing a manipulation task.

What's the problem?

Current systems that try to understand what they see and act on it struggle to balance two goals. They need an efficient, predictable picture of what *might* happen next, but they also need to preserve the fine-grained details that matter for acting precisely. If they focus too much on predicting the future, they lose those details; if they focus too much on details, prediction becomes inefficient and they cannot plan ahead effectively.

What's the solution?

WoG solves this by creating a simplified 'summary' of what the computer expects to see in the future. Instead of trying to predict everything, it focuses on the most important information and turns it into a compact 'condition'. The computer then learns to predict this simplified condition *alongside* predicting the actions it should take. This way, it builds a good understanding of the world without getting bogged down in unnecessary details, and uses that understanding to choose the right actions.
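The idea above, compressing a future observation into a compact condition and training the policy to predict that condition alongside the action, can be sketched as a toy objective. This is a minimal NumPy illustration, not the paper's architecture: the fixed random encoder, the linear heads, the dimensions, and the loss weight `lam` are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

OBS_DIM, COND_DIM, ACT_DIM = 64, 8, 4

# Hypothetical fixed encoder: compresses a future observation into a
# compact condition vector (a stand-in for WoG's learned compression).
W_enc = rng.normal(size=(OBS_DIM, COND_DIM)) / np.sqrt(OBS_DIM)

def encode_condition(future_obs):
    """Map a future observation to a compact condition (illustrative)."""
    return np.tanh(future_obs @ W_enc)

# Toy policy parameters: from the current observation, jointly predict
# the compressed condition and the action (linear heads for simplicity).
W_cond = rng.normal(size=(OBS_DIM, COND_DIM)) * 0.01
W_act = rng.normal(size=(OBS_DIM, ACT_DIM)) * 0.01

def joint_loss(obs, future_obs, action, lam=0.5):
    """Combined objective: action error plus condition-prediction error.

    The condition term pushes the policy to model the future in the
    compact condition space rather than in raw pixels.
    """
    cond_target = encode_condition(future_obs)  # compressed "summary"
    cond_pred = obs @ W_cond                    # predicted condition
    act_pred = obs @ W_act                      # predicted action
    l_act = np.mean((act_pred - action) ** 2)
    l_cond = np.mean((cond_pred - cond_target) ** 2)
    return l_act + lam * l_cond

# One synthetic training example.
obs = rng.normal(size=OBS_DIM)
future_obs = rng.normal(size=OBS_DIM)
action = rng.normal(size=ACT_DIM)
loss = joint_loss(obs, future_obs, action)
print(f"joint loss: {loss:.4f}")
```

The key design point the sketch captures is that the supervision target for the world model is the *compressed* condition, so the policy never has to reconstruct full future observations, only the information the condition retains.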

Why it matters?

This research is important because it allows computers to perform actions in videos more accurately and reliably, even in new and challenging situations. It also shows that the system can learn effectively from watching a lot of videos of people performing tasks, which is a step towards creating robots and AI systems that can interact with the real world more effectively.

Abstract

Leveraging future observation modeling to facilitate action generation presents a promising avenue for enhancing the capabilities of Vision-Language-Action (VLA) models. However, existing approaches struggle to strike a balance between maintaining efficient, predictable future representations and preserving sufficient fine-grained information to guide precise action generation. To address this limitation, we propose WoG (World Guidance), a framework that maps future observations into compact conditions by injecting them into the action inference pipeline. The VLA is then trained to simultaneously predict these compressed conditions alongside future actions, thereby achieving effective world modeling within the condition space for action inference. We demonstrate that modeling and predicting this condition space not only facilitates fine-grained action generation but also exhibits superior generalization capabilities. Moreover, it learns effectively from substantial human manipulation videos. Extensive experiments across both simulation and real-world environments validate that our method significantly outperforms existing methods based on future prediction. Project page is available at: https://selen-suyue.github.io/WoGNet/