ROOM: A Physics-Based Continuum Robot Simulator for Photorealistic Medical Datasets Generation

Salvatore Esposito, Matías Mattamala, Daniel Rebain, Francis Xiatian Zhang, Kevin Dhaliwal, Mohsen Khadem, Subramanian Ramamoorthy

2025-09-17

Summary

This paper introduces a new computer simulator called ROOM, designed to create realistic training data for the flexible, steerable robots that doctors use to navigate the lungs during bronchoscopy procedures.

What's the problem?

Currently, it's really hard to develop and test these robotic systems because collecting real-world data from inside a patient's lungs is difficult and raises ethical concerns. You can't just practice on patients! At the same time, building software that lets the robot operate on its own requires a lot of realistic data, covering both what the doctor *sees* through the camera and how the robot physically interacts with the lung tissue, and that kind of data is very hard to come by.

What's the solution?

The researchers built ROOM, which takes CT scans of patients' lungs and turns them into photorealistic simulations. The simulator doesn't just produce images; it generates multiple types of data, such as color images with realistic noise and reflections, depth information, surface orientation, and motion between frames, all at the tiny scale a doctor would see during a procedure. They then tested this data by checking how well existing computer programs could perform two standard tasks, estimating the robot's position from multiple views and judging distances from a single camera image, and found that the simulated scenes posed realistic challenges. They also showed that training these programs on data created by ROOM improved their performance.
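To make the idea of "multi-modal sensor data" concrete, here is a minimal sketch of what one training sample from such a simulator might look like. This is not the actual ROOM API; the function name, shapes, and noise model are illustrative assumptions based on the modalities the paper lists (RGB with noise, metric depth, surface normals, optical flow).

```python
import numpy as np

def render_sample(h=240, w=320, seed=0):
    """Hypothetical single frame of multi-modal bronchoscopy data.

    Illustrative only -- the real ROOM pipeline renders these
    modalities from patient CT scans, not random noise.
    """
    rng = np.random.default_rng(seed)
    # Metric depth in metres; airways are millimetre-to-centimetre scale.
    depth = rng.uniform(0.005, 0.05, (h, w))
    # RGB image in [0, 1] with additive sensor noise.
    rgb = np.clip(rng.uniform(0, 1, (h, w, 3))
                  + rng.normal(0, 0.02, (h, w, 3)), 0.0, 1.0)
    # Unit surface normals (here trivially facing the camera).
    normals = np.zeros((h, w, 3))
    normals[..., 2] = 1.0
    # Optical flow: per-pixel (dx, dy) motion between frames.
    flow = rng.normal(0, 1, (h, w, 2))
    return {"rgb": rgb, "depth": depth, "normals": normals, "flow": flow}

sample = render_sample()
```

A real sample would pair these arrays with the camera pose, so they can supervise exactly the two tasks the paper evaluates: multi-view pose estimation and monocular depth estimation.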

Why does it matter?

ROOM matters because it lets researchers generate large amounts of training data for these lung-navigating robots without relying on scarce, hard-to-obtain real patient data. This should speed up the development of better robotic tools for diagnosing and treating lung disease, and ultimately improve patient care.

Abstract

Continuum robots are advancing bronchoscopy procedures by accessing complex lung airways and enabling targeted interventions. However, their development is limited by the lack of realistic training and test environments: Real data is difficult to collect due to ethical constraints and patient safety concerns, and developing autonomy algorithms requires realistic imaging and physical feedback. We present ROOM (Realistic Optical Observation in Medicine), a comprehensive simulation framework designed for generating photorealistic bronchoscopy training data. By leveraging patient CT scans, our pipeline renders multi-modal sensor data including RGB images with realistic noise and light specularities, metric depth maps, surface normals, optical flow and point clouds at medically relevant scales. We validate the data generated by ROOM in two canonical tasks for medical robotics -- multi-view pose estimation and monocular depth estimation, demonstrating diverse challenges that state-of-the-art methods must overcome to transfer to these medical settings. Furthermore, we show that the data produced by ROOM can be used to fine-tune existing depth estimation models to overcome these challenges, also enabling other downstream applications such as navigation. We expect that ROOM will enable large-scale data generation across diverse patient anatomies and procedural scenarios that are challenging to capture in clinical settings. Code and data: https://github.com/iamsalvatore/room.