NAVSIM: Data-Driven Non-Reactive Autonomous Vehicle Simulation and Benchmarking

Daniel Dauner, Marcel Hallgarten, Tianyu Li, Xinshuo Weng, Zhiyu Huang, Zetong Yang, Hongyang Li, Igor Gilitschenski, Boris Ivanovic, Marco Pavone, Andreas Geiger, Kashyap Chitta

2024-06-24

Summary

This paper introduces NAVSIM, a simulation framework for testing and evaluating autonomous driving systems in a controlled way. It combines real-world driving data with a non-reactive simulation environment, so researchers can benchmark driving policies without other road users reacting to the vehicle under test.

What's the problem?

Evaluating how well autonomous vehicles drive is complicated. Traditional methods either replay real data in an open loop, which does not reveal how a policy would behave once its own actions affect the scene, or use closed-loop simulators, which are computationally expensive to scale and often differ noticeably from real-world conditions. This makes it difficult to draw clear conclusions about the performance of different driving algorithms.

What's the solution?

NAVSIM offers a middle ground: a non-reactive simulation in which other vehicles follow their recorded trajectories and do not change their behavior based on the autonomous vehicle's actions. The authors build these simulations from real-world driving scenarios and compute metrics such as progress and time to collision by unrolling the planned trajectory over a short horizon. Because the environment is fixed, these metrics can be computed cheaply at scale while aligning better with closed-loop performance than simple displacement errors, as the sketch below illustrates.
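
To make this concrete, here is a minimal sketch of how non-reactive metrics of this kind could be computed. It assumes planned ego positions and replayed agent positions are plain NumPy arrays; the function names, the circular collision radius, and the fixed time step are illustrative simplifications, not NAVSIM's actual implementation.

```python
import numpy as np

def time_to_collision(ego_traj, agent_trajs, dt=0.5, radius=2.0):
    """Earliest time (in seconds) at which the planned ego trajectory
    comes within `radius` meters of any logged, non-reactive agent.
    Returns np.inf if no near-collision occurs within the horizon.

    ego_traj:    (T, 2) array of planned ego positions (x, y)
    agent_trajs: (N, T, 2) array of replayed agent positions
    """
    # Distance between the ego and every agent at each step -> (N, T)
    dists = np.linalg.norm(agent_trajs - ego_traj[None], axis=-1)
    # Per-step flag: does any agent come too close? -> (T,)
    unsafe = np.any(dists < radius, axis=0)
    hits = np.flatnonzero(unsafe)
    return hits[0] * dt if hits.size else np.inf

def progress(ego_traj):
    """Distance travelled along the planned trajectory, in meters."""
    return float(np.linalg.norm(np.diff(ego_traj, axis=0), axis=-1).sum())
```

Because the background agents simply replay their logged motion, both metrics are pure functions of the policy's output and the recorded scene, which is what makes large-scale, reproducible evaluation feasible.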

Why it matters?

This work is important because it offers a more reliable way to benchmark autonomous vehicles, helping researchers understand their strengths and weaknesses. By using NAVSIM, developers can improve their algorithms and ultimately make autonomous driving safer and more effective. The introduction of this tool could accelerate advancements in autonomous vehicle technology.

Abstract

Benchmarking vision-based driving policies is challenging. On one hand, open-loop evaluation with real data is easy, but these results do not reflect closed-loop performance. On the other, closed-loop evaluation is possible in simulation, but is hard to scale due to its significant computational demands. Further, the simulators available today exhibit a large domain gap to real data. This has resulted in an inability to draw clear conclusions from the rapidly growing body of research on end-to-end autonomous driving. In this paper, we present NAVSIM, a middle ground between these evaluation paradigms, where we use large datasets in combination with a non-reactive simulator to enable large-scale real-world benchmarking. Specifically, we gather simulation-based metrics, such as progress and time to collision, by unrolling bird's eye view abstractions of the test scenes for a short simulation horizon. Our simulation is non-reactive, i.e., the evaluated policy and environment do not influence each other. As we demonstrate empirically, this decoupling allows open-loop metric computation while being better aligned with closed-loop evaluations than traditional displacement errors. NAVSIM enabled a new competition held at CVPR 2024, where 143 teams submitted 463 entries, resulting in several new insights. On a large set of challenging scenarios, we observe that simple methods with moderate compute requirements such as TransFuser can match recent large-scale end-to-end driving architectures such as UniAD. Our modular framework can potentially be extended with new datasets, data curation strategies, and metrics, and will be continually maintained to host future challenges. Our code is available at https://github.com/autonomousvision/navsim.
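
NAVSIM ultimately reports a single driving score that aggregates several subscores. As a hedged illustration of that idea, the sketch below multiplies hard safety terms (no collision, drivable-area compliance) with a weighted average of soft terms (progress, time to collision, comfort). The specific weights and the function name are assumptions for illustration, not the official NAVSIM definition.

```python
def aggregate_driving_score(no_collision, drivable_area,
                            ego_progress, ttc, comfort):
    """Combine subscores into a single score in [0, 1].

    no_collision and drivable_area act as hard multiplicative
    penalties in {0, 1}; the remaining terms are soft subscores
    in [0, 1]. Weights here are illustrative placeholders.
    """
    soft = (5 * ego_progress + 5 * ttc + 2 * comfort) / 12
    return no_collision * drivable_area * soft
```

Multiplying the hard terms means a single collision or off-road event zeroes the score for that scenario, while the soft terms trade off quality among otherwise safe plans.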