Target-Bench: Can World Models Achieve Mapless Path Planning with Semantic Targets?

Dingrui Wang, Hongyuan Ye, Zhihao Liang, Zhexiao Sun, Zhaowei Lu, Yuchen Zhang, Yuyu Zhao, Yuan Gao, Marvin Seegert, Finn Schäfer, Haotong Qin, Wei Li, Luigi Palmieri, Felix Jahncke, Mattia Piccinini, Johannes Betz

2025-11-25

Summary

This research investigates how well current artificial intelligence models that create realistic videos, often called 'world models', can actually be used to help robots plan routes in the real world without a pre-built map.

What's the problem?

While these video-generating AI models are getting really good at *looking* real, it's unclear if they understand enough about the world to let a robot figure out how to get from one place to another, especially without a pre-existing map. There wasn't a good way to test this specifically, so it was hard to know how useful these models are for robotics.

What's the solution?

The researchers created a new benchmark called Target-Bench. It includes 450 robot-collected videos of movement through different environments, each paired with a known correct path recorded using SLAM. They used these videos to test several state-of-the-art world models, including Sora 2, Veo 3.1, and the Wan series, checking whether the AI could 'watch' a scene and then plan a similar route for a robot. They also showed that even a smaller, 5-billion-parameter open-source model could be significantly improved with a small amount of training on their new dataset.

Why it matters?

This work is important because it shows that current video-generating AI isn't quite ready to be used for robot navigation. It highlights the gap between creating realistic images and actually understanding the world well enough to make plans. However, it also demonstrates that with the right data, even smaller AI models can learn to help robots navigate, paving the way for more intelligent and adaptable robots in the future.

Abstract

While recent world models generate highly realistic videos, their ability to perform robot path planning remains unclear and unquantified. We introduce Target-Bench, the first benchmark specifically designed to evaluate world models on mapless path planning toward semantic targets in real-world environments. Target-Bench provides 450 robot-collected video sequences spanning 45 semantic categories with SLAM-based ground truth trajectories. Our evaluation pipeline recovers camera motion from generated videos and measures planning performance using five complementary metrics that quantify target-reaching capability, trajectory accuracy, and directional consistency. We evaluate state-of-the-art models including Sora 2, Veo 3.1, and the Wan series. The best off-the-shelf model (Wan2.2-Flash) achieves only 0.299 overall score, revealing significant limitations in current world models for robotic planning tasks. We show that fine-tuning an open-source 5B-parameter model on only 325 scenarios from our dataset achieves 0.345 overall score -- an improvement of more than 400% over its base version (0.066) and 15% higher than the best off-the-shelf model. We will open-source the code and dataset.
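The abstract describes recovering camera motion from generated videos and scoring it against SLAM ground truth with metrics for target-reaching, trajectory accuracy, and directional consistency. As a rough illustration of what such trajectory scoring can look like, here is a minimal sketch; the function name, the specific metrics (average/final displacement error, step-direction cosine similarity), and the assumption that both paths are resampled to equal length are illustrative choices, not the paper's exact five-metric pipeline.

```python
import numpy as np

def trajectory_metrics(pred, gt):
    """Compare a trajectory recovered from a generated video against a
    ground-truth path. `pred` and `gt` are (N, 2) arrays of x-y
    waypoints, assumed resampled to the same length."""
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    # Average displacement error: mean point-wise distance along the path.
    ade = np.linalg.norm(pred - gt, axis=1).mean()
    # Final displacement error: how close the path ends to the target.
    fde = np.linalg.norm(pred[-1] - gt[-1])
    # Directional consistency: mean cosine similarity between step vectors.
    dp, dg = np.diff(pred, axis=0), np.diff(gt, axis=0)
    cos = np.sum(dp * dg, axis=1) / (
        np.linalg.norm(dp, axis=1) * np.linalg.norm(dg, axis=1) + 1e-9
    )
    return {"ade": ade, "fde": fde, "direction": cos.mean()}

# Example: a predicted path that runs parallel to ground truth,
# offset by 0.2 m sideways.
gt = np.stack([np.linspace(0.0, 5.0, 6), np.zeros(6)], axis=1)
pred = gt + np.array([0.0, 0.2])
print(trajectory_metrics(pred, gt))
```

A real pipeline would additionally need to align scale and origin between the recovered camera trajectory and the SLAM map before any of these distances are meaningful.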