OmniNWM: Omniscient Driving Navigation World Models

Bohan Li, Zhuang Ma, Dalong Du, Baorui Peng, Zhujin Liang, Zhenqiang Liu, Chao Ma, Yueming Jin, Hao Zhao, Wenjun Zeng, Xin Jin

2025-10-23

Summary

This paper introduces a new type of computer model, called OmniNWM, designed to help self-driving cars understand and interact with the world around them. It aims to create a complete 'world model' that considers what the car 'sees', what actions it can take, and the 'rewards' it gets for good driving.

What's the problem?

Current driving world models are limited in several key ways. They typically handle only a few types of sensory input, can only predict a short distance into the future, offer imprecise control over the simulated actions, and lack any notion of what makes driving 'good', such as staying safe and following traffic rules. They treat these aspects separately instead of as one unified system.

What's the solution?

OmniNWM solves these problems with a single model that handles everything at once. It generates detailed, 360-degree views of the environment as aligned outputs: color images, semantic labels for what each object is, metric depth (distances to objects), and a 3D occupancy map. A flexible 'forcing' strategy lets it keep generating stable video far into the future, frame by frame. For actions, it encodes the car's planned trajectory as a pixel-level ray map, allowing very precise control of the simulated driving. Finally, instead of trying to *learn* what's rewarding from images, it directly uses the generated 3D occupancy map to define rule-based rewards for safe and rule-following behavior.
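The occupancy-grounded reward idea can be sketched in a few lines: given a semantic occupancy grid and the ego car's planned positions, assign a penalty whenever the car enters an occupied cell and a small positive reward otherwise. This is a minimal illustration under assumed conventions, not the paper's actual reward rules; the function name, label IDs, and reward values are invented for the example.

```python
import numpy as np

def occupancy_reward(occ, traj, free_id=0, drivable_id=1,
                     collision_penalty=-1.0, step_reward=0.1):
    """Toy rule-based dense reward from a semantic occupancy grid.

    occ:  (X, Y, Z) integer grid of semantic labels (hypothetical IDs).
    traj: (T, 3) voxel indices of the ego vehicle's planned positions.
    Returns a (T,) array: a penalty for every step that lands in an
    occupied (non-free, non-drivable) cell, a small bonus otherwise.
    """
    rewards = []
    for x, y, z in traj:
        label = occ[x, y, z]
        if label not in (free_id, drivable_id):
            rewards.append(collision_penalty)  # obstacle cell: collision
        else:
            rewards.append(step_reward)        # safe, rule-compliant step
    return np.array(rewards)
```

Because the reward is read straight off the generated occupancy grid, no separately trained image-based reward model is needed.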

Why it matters?

This research is important because it represents a significant step towards more realistic and reliable self-driving car technology. By creating a more complete and accurate world model, OmniNWM can help cars make better decisions, navigate more safely, and plan for the long term. The ability to directly evaluate driving performance using the generated 3D environment also provides a way to test and improve these systems.

Abstract

Autonomous driving world models are expected to work effectively across three core dimensions: state, action, and reward. Existing models, however, are typically restricted to limited state modalities, short video sequences, imprecise action control, and a lack of reward awareness. In this paper, we introduce OmniNWM, an omniscient panoramic navigation world model that addresses all three dimensions within a unified framework. For state, OmniNWM jointly generates panoramic videos of RGB, semantics, metric depth, and 3D occupancy. A flexible forcing strategy enables high-quality long-horizon auto-regressive generation. For action, we introduce a normalized panoramic Plücker ray-map representation that encodes input trajectories into pixel-level signals, enabling highly precise and generalizable control over panoramic video generation. Regarding reward, we move beyond learning reward functions with external image-based models: instead, we leverage the generated 3D occupancy to directly define rule-based dense rewards for driving compliance and safety. Extensive experiments demonstrate that OmniNWM achieves state-of-the-art performance in video generation, control accuracy, and long-horizon stability, while providing a reliable closed-loop evaluation framework through occupancy-grounded rewards. Project page is available at https://github.com/Arlo0o/OmniNWM.
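As a rough illustration of the action encoding, a Plücker ray map stores, for every pixel, the 6D Plücker coordinates of the viewing ray through that pixel: its unit direction d and moment o × d, where o is the camera center. The sketch below assumes standard pinhole conventions (intrinsics K, world-to-camera rotation R and translation t); it is a generic construction, not the paper's implementation or its normalization scheme.

```python
import numpy as np

def plucker_ray_map(K, R, t, H, W):
    """Per-pixel Plücker coordinates (direction + moment) for a pinhole camera.

    K: (3, 3) intrinsics; R, t: world-to-camera rotation and translation,
    i.e. x_cam = R @ x_world + t. Returns an (H, W, 6) array.
    """
    o = -R.T @ t  # camera center in world coordinates
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # Homogeneous pixel centers, shape (H, W, 3)
    pix = np.stack([xs + 0.5, ys + 0.5, np.ones_like(xs, dtype=float)], axis=-1)
    dirs_cam = pix @ np.linalg.inv(K).T        # back-project to camera rays
    dirs_world = dirs_cam @ R                  # rotate into world frame (R^T d)
    d = dirs_world / np.linalg.norm(dirs_world, axis=-1, keepdims=True)
    m = np.cross(np.broadcast_to(o, d.shape), d)  # moment o × d
    return np.concatenate([d, m], axis=-1)     # (H, W, 6) ray map
```

Because each pixel carries the full ray geometry of the requested camera pose, a sequence of such maps gives the generator a dense, pixel-aligned description of the input trajectory rather than a single global pose vector.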