
RADIO-ViPE: Online Tightly Coupled Multi-Modal Fusion for Open-Vocabulary Semantic SLAM in Dynamic Environments

Zaid Nasser, Mikhail Iumanov, Tianhao Li, Maxim Popov, Jaafar Mahmoud, Sergey Kolyubin

2026-04-30


Summary

This paper introduces RADIO-ViPE, a system that lets a robot or program ground what you ask about in a video to specific objects or regions of a 3D environment, even while that environment is changing.

What's the problem?

Existing 'semantic SLAM' systems – ones that build a map and understand what's in it through language – usually need carefully prepared video with precise camera calibration and poses, and they assume the world around them isn't moving. That makes them hard to use in real-world situations where all you have is an ordinary video camera and things keep changing, like someone moving furniture while you're recording.

What's the solution?

RADIO-ViPE solves this by working directly on ordinary video from a single camera, with no special setup and no prior knowledge of the camera's parameters. It fuses what the camera 'sees' with what you 'say' (your language query) using vision-language foundation models, and it builds a 3D map that it keeps updating and consistent even as objects move or the scene is rearranged. A robust optimization technique lets it absorb these changes without corrupting the map.
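To make the matching step concrete, here is a minimal sketch, assuming a shared vision-language embedding space, of how a language query can be associated with stored map regions by cosine similarity. The map layout, embedding dimension, and function names below are illustrative placeholders, not RADIO-ViPE's actual interfaces.

```python
# Sketch: open-vocabulary grounding as a cosine-similarity lookup
# between a query embedding and per-region visual embeddings.
# All structures here are hypothetical stand-ins.
import numpy as np

def cosine_scores(query: np.ndarray, features: np.ndarray) -> np.ndarray:
    """Cosine similarity between one query vector and a stack of feature vectors."""
    query = query / np.linalg.norm(query)
    features = features / np.linalg.norm(features, axis=-1, keepdims=True)
    return features @ query

# Hypothetical map: each region stores a fused visual embedding and a 3D centroid.
map_regions = [
    {"name": "region_0", "embedding": np.random.randn(512), "centroid": np.array([1.2, 0.3, 0.8])},
    {"name": "region_1", "embedding": np.random.randn(512), "centroid": np.array([-0.5, 1.1, 0.2])},
]

def ground_query(query_embedding: np.ndarray, regions: list, top_k: int = 1):
    """Return the map regions whose embeddings best match the query."""
    feats = np.stack([r["embedding"] for r in regions])
    scores = cosine_scores(query_embedding, feats)
    best = np.argsort(scores)[::-1][:top_k]
    return [(regions[i]["name"], regions[i]["centroid"], float(scores[i])) for i in best]

# In the real system the query embedding would come from the language side of
# the foundation model; a random vector stands in here.
print(ground_query(np.random.randn(512), map_regions))
```

In the full system the region embeddings would be fused across frames and tied into the factor graph, so the returned 3D centroid stays consistent with the current map even after objects move.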

Why it matters?

This is a big step forward because it makes these kinds of 'understanding' systems much more practical for real-world robots and applications. Imagine a robot that can follow your instructions like 'bring me the red cup' even if someone has moved things around since it last saw the room, or a program that can analyze a video and pinpoint exactly where something happens based on your spoken question. RADIO-ViPE makes that possible.

Abstract

We present RADIO-ViPE (Reduce All Domains Into One -- Video Pose Engine), an online semantic SLAM system that enables geometry-aware open-vocabulary grounding, associating arbitrary natural language queries with localized 3D regions and objects in dynamic environments. Unlike existing approaches that require calibrated, posed RGB-D input, RADIO-ViPE operates directly on raw monocular RGB video streams, requiring no prior camera intrinsics, depth sensors, or pose initialization. The system tightly couples multi-modal embeddings -- spanning vision and language -- derived from agglomerative foundation models (e.g., RADIO) with geometric scene information. This coupling takes place in initialization, optimization, and factor graph connections to improve the consistency of the map across modalities. The optimization is wrapped within adaptive robust kernels, designed to handle both actively moving objects and agent-displaced scene elements (e.g., furniture rearranged during an ego-centric session). Experiments demonstrate that RADIO-ViPE achieves state-of-the-art results on the dynamic TUM-RGBD benchmark while maintaining competitive performance against offline open-vocabulary methods that rely on calibrated data and static scene assumptions. RADIO-ViPE bridges a critical gap in real-world deployment, enabling robust open-vocabulary semantic grounding for autonomous robotics and unconstrained in-the-wild video streams. Project page: https://be2rlab.github.io/radio_vipe
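As a rough illustration of the 'adaptive robust kernels' mentioned above, the sketch below shows how a robust loss down-weights large residuals during optimization so that observations of moving or displaced objects stop dominating the pose estimate. A fixed Huber kernel is used as a stand-in; the paper's adaptive kernels are not specified here, and all numbers are illustrative.

```python
# Sketch: robust re-weighting of residuals, so dynamic observations
# (large residuals) contribute little to the pose update.
import numpy as np

def huber_weight(residual_norm: np.ndarray, delta: float) -> np.ndarray:
    """IRLS weight of the Huber loss: 1 inside the inlier band, decaying outside."""
    weights = np.ones_like(residual_norm)
    outliers = residual_norm > delta
    weights[outliers] = delta / residual_norm[outliers]
    return weights

# Reprojection-style residuals: static points are small, a moving object is large.
residuals = np.array([0.3, 0.5, 0.4, 6.0, 7.5])  # pixels, illustrative
print(huber_weight(residuals, delta=1.0))  # dynamic points get weights << 1
```

In an iteratively re-weighted least-squares loop these weights are recomputed every iteration, so the optimizer gradually discounts dynamic observations instead of having to segment and discard them up front.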