SurgWorld: Learning Surgical Robot Policies from Videos via World Modeling

Yufan He, Pengfei Guo, Mengya Xu, Zhaoshuo Li, Andriy Myronenko, Dillan Imans, Bingjie Liu, Dongren Yang, Mingxue Gu, Yongnan Ji, Yueming Jin, Ren Zhao, Baiyong Shen, Daguang Xu

2025-12-30

Summary

This research tackles the challenge of teaching surgical robots to perform tasks autonomously, specifically when there isn't much data available to learn from.

What's the problem?

Surgical robots need a lot of examples to learn how to operate, but getting that data – videos of surgeries *and* precise information about what the robot is doing during those surgeries – is really difficult. There are tons of surgical videos out there, but they don't tell us exactly what movements the surgeon made with their instruments. This makes it hard to use standard learning techniques that rely on paired video and action data.

What's the solution?

The researchers built a generative world model called SurgWorld that can produce realistic surgical videos, along with a dataset called SATA (Surgical Action Text Alignment) that provides detailed text descriptions of surgical actions. Because these generated videos come without action labels, they also developed a way to *infer* the robot's movements (kinematics) directly from the video frames, using an inverse dynamics model. They then combined this synthetic video-action data with a small amount of real surgical data to train a 'surgical VLA policy' – essentially a program that tells the robot what to do – and tested it on a real surgical robot.
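The pipeline described above can be sketched in miniature: a function standing in for the inverse dynamics model takes two consecutive frames and predicts the action between them, and a labeling loop turns an action-free synthetic video into paired video-action data. All names, shapes, and the linear stand-in model here are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for an inverse dynamics model (IDM): given two
# consecutive video frames, predict the action (e.g. a kinematics delta)
# that connects them. The real IDM is a learned neural network; a random
# linear map over flattened frames just illustrates the interface.
def inverse_dynamics(frame_t, frame_t1, weights):
    feats = np.concatenate([frame_t.ravel(), frame_t1.ravel()])
    return weights @ feats  # pseudo-action, here a 7-dim vector

# Label a synthetic (action-free) video clip with pseudo-kinematics,
# one action per pair of adjacent frames.
def label_video(frames, weights):
    return [inverse_dynamics(frames[i], frames[i + 1], weights)
            for i in range(len(frames) - 1)]

frames = rng.random((5, 8, 8))         # 5 tiny stand-in "video" frames
weights = rng.random((7, 2 * 8 * 8))   # maps a frame pair to a 7-dim action
pseudo_actions = label_video(frames, weights)
print(len(pseudo_actions))             # one action per frame transition
```

The key point the sketch captures is that the world model only has to produce plausible videos; the IDM supplies the missing action channel afterward.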

Why it matters?

This work is important because it offers a way around the data shortage in surgical robotics. By using a generative world model to create synthetic video-action data, robots can be trained to perform surgical tasks more effectively even with limited real-world examples. This could lead to more reliable and adaptable surgical robots, and offers a path toward robots acquiring surgical skills more efficiently.

Abstract

Data scarcity remains a fundamental barrier to achieving fully autonomous surgical robots. While large-scale vision-language-action (VLA) models have shown impressive generalization in household and industrial manipulation by leveraging paired video-action data from diverse domains, surgical robotics suffers from the paucity of datasets that include both visual observations and accurate robot kinematics. In contrast, vast corpora of surgical videos exist, but they lack corresponding action labels, preventing direct application of imitation learning or VLA training. In this work, we aim to alleviate this problem by learning policy models from SurgWorld, a world model designed for surgical physical AI. We curated the Surgical Action Text Alignment (SATA) dataset with detailed action descriptions specifically for surgical robots. We then built SurgWorld on a state-of-the-art physical AI world model and SATA; it is able to generate diverse, generalizable, and realistic surgery videos. We are also the first to use an inverse dynamics model to infer pseudo-kinematics from synthetic surgical videos, producing synthetic paired video-action data. We demonstrate that a surgical VLA policy trained with these augmented data significantly outperforms models trained only on real demonstrations on a real surgical robot platform. Our approach offers a scalable path toward autonomous surgical skill acquisition by leveraging the abundance of unlabeled surgical video and generative world modeling, thus opening the door to generalizable and data-efficient surgical robot policies.
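The augmented-training recipe the abstract describes, a small pool of real demonstrations combined with a much larger pool of pseudo-labeled synthetic clips, can be sketched as a simple batch-mixing routine. The pool sizes, the `real_fraction` parameter, and the string placeholders are assumptions for illustration; the paper does not specify its mixing ratio.

```python
import random

random.seed(0)

# Hypothetical data pools: a few real (video, kinematics) demonstrations
# and many world-model clips labeled with IDM pseudo-actions.
real_demos = [("real_clip_%d" % i, "measured_kinematics") for i in range(10)]
synthetic = [("gen_clip_%d" % i, "pseudo_kinematics") for i in range(1000)]

def make_training_batch(real, synth, batch_size=32, real_fraction=0.25):
    # Oversample the scarce real data so every batch keeps a fixed share
    # of ground-truth kinematics alongside the synthetic pairs.
    n_real = int(batch_size * real_fraction)
    batch = random.choices(real, k=n_real)
    batch += random.choices(synth, k=batch_size - n_real)
    random.shuffle(batch)
    return batch

batch = make_training_batch(real_demos, synthetic)
print(len(batch))  # 32 examples, 8 of them real
```

Oversampling the real demonstrations is one common way to keep a small trusted dataset from being drowned out by abundant synthetic data; the actual policy-training details belong to the paper.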