VideoWorld: Exploring Knowledge Learning from Unlabeled Videos
Zhongwei Ren, Yunchao Wei, Xun Guo, Yao Zhao, Bingyi Kang, Jiashi Feng, Xiaojie Jin
2025-01-21

Summary
This paper introduces VideoWorld, a new AI system that learns complex knowledge just by watching videos, without needing any text or labels. It's like teaching a computer about the world purely through observation, much the way babies learn by watching their surroundings.
What's the problem?
Most current AI systems learn from text or labeled data, which is expensive and slow to create and limits what they can learn. It's like trying to teach someone about the world only through books, without letting them experience anything firsthand. The researchers wanted to know whether an AI could pick up complex tasks and knowledge purely by watching videos, with no explanations or labels attached.
What's the solution?
The researchers built VideoWorld, an AI system that watches and learns from unlabeled videos. At its core is a technique called a Latent Dynamics Model, which compresses the visual changes between frames into compact codes so the system can understand and predict what happens next. They tested VideoWorld on two main tasks: playing the game of Go and controlling robotic arms. Remarkably, VideoWorld reached a professional (5-dan) level at Go and learned to control robotic arms across different environments, all just from watching videos. A rough sketch of this setup appears below.
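To make this concrete, here is a minimal sketch (in PyTorch) of the kind of setup described above: an autoregressive transformer predicts the next discrete video token, while a second head predicts a compact latent code summarizing upcoming visual change. All module and variable names here are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch: autoregressive video-token prediction plus a latent
# dynamics head. Names, sizes, and structure are assumptions for
# illustration only.
import torch
import torch.nn as nn

class LatentDynamicsSketch(nn.Module):
    def __init__(self, vocab_size=8192, d_model=512, n_latent=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=6)
        self.next_token_head = nn.Linear(d_model, vocab_size)  # next video token
        self.dynamics_head = nn.Linear(d_model, n_latent)      # code for visual change

    def forward(self, tokens):
        h = self.backbone(self.embed(tokens))  # causal masking omitted for brevity
        return self.next_token_head(h), self.dynamics_head(h)

model = LatentDynamicsSketch()
clips = torch.randint(0, 8192, (2, 16))  # 2 clips, 16 discrete video tokens each
logits, latents = model(clips)
# Training would combine cross-entropy on the next-token logits with a loss
# tying `latents` to codes produced by a separate dynamics encoder.
```

In this sketch, the dynamics head gives the model a shortcut: instead of having to infer "what changed" pixel by pixel, it is trained to predict a compressed summary of the change directly, which is the intuition behind representing visual change explicitly.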
Why it matters?
This research matters because it demonstrates a way for AI to learn that's closer to how humans naturally learn: by observing the world around them. It could lead to AI systems that are more flexible and can handle a wider range of tasks without large amounts of task-specific training data. That would be useful in many fields, from building smarter robots to developing AI that better understands and interacts with the real world, making such systems more helpful and easier to use in daily life.
Abstract
This work explores whether a deep generative model can learn complex knowledge solely from visual input, in contrast to the prevalent focus on text-based models like large language models (LLMs). We develop VideoWorld, an auto-regressive video generation model trained on unlabeled video data, and test its knowledge acquisition abilities in video-based Go and robotic control tasks. Our experiments reveal two key findings: (1) video-only training provides sufficient information for learning knowledge, including rules, reasoning and planning capabilities, and (2) the representation of visual change is crucial for knowledge acquisition. To improve both the efficiency and efficacy of this process, we introduce the Latent Dynamics Model (LDM) as a key component of VideoWorld. Remarkably, VideoWorld reaches a 5-dan professional level in the Video-GoBench with just a 300-million-parameter model, without relying on search algorithms or reward mechanisms typical in reinforcement learning. In robotic tasks, VideoWorld effectively learns diverse control operations and generalizes across environments, approaching the performance of oracle models in CALVIN and RLBench. This study opens new avenues for knowledge acquisition from visual data, with all code, data, and models open-sourced for further research.
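The abstract highlights that representing visual change is the key to knowledge acquisition. Below is a hedged sketch of what a latent dynamics encoder of that flavor might look like: it compresses the change between a frame and a future frame into a discrete code via vector quantization. The class name, shapes, and layer choices are assumptions for illustration, not the released implementation.

```python
# Hedged sketch of a vector-quantized encoder for visual change. All
# details here are illustrative assumptions.
import torch
import torch.nn as nn

class ChangeQuantizer(nn.Module):
    def __init__(self, in_ch=3, d_code=64, codebook_size=512):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch * 2, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, d_code, 4, stride=2, padding=1),
            nn.AdaptiveAvgPool2d(1),  # one code vector per frame pair
        )
        self.codebook = nn.Embedding(codebook_size, d_code)

    def forward(self, frame_t, frame_future):
        # Encode the (current, future) frame pair, then snap to the
        # nearest codebook entry.
        z = self.encoder(torch.cat([frame_t, frame_future], dim=1)).flatten(1)
        dists = torch.cdist(z, self.codebook.weight)  # (B, codebook_size)
        idx = dists.argmin(dim=1)
        z_q = self.codebook(idx)
        # Straight-through estimator so gradients still reach the encoder.
        return z + (z_q - z).detach(), idx

quant = ChangeQuantizer()
f0, f5 = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
codes, indices = quant(f0, f5)
```

The appeal of discrete codes like `indices` is that they turn continuous, high-dimensional visual change into a small vocabulary the autoregressive model can predict efficiently, consistent with the paper's claim that compact representations of change improve both efficiency and efficacy.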