GPUDrive: Data-driven, multi-agent driving simulation at 1 million FPS

Saman Kazemkhani, Aarav Pandya, Daphne Cornelisse, Brennan Shacklett, Eugene Vinitsky

2024-08-06

Summary

This paper introduces GPUDrive, a high-speed simulator designed for multi-agent driving simulations that can run at over one million frames per second (FPS), enabling efficient training of AI agents for driving tasks.

What's the problem?

Training AI agents to drive in complex environments typically requires billions of steps of experience, which takes a long time to gather with conventional simulators. These traditional methods are slow and inefficient, making it difficult to develop effective multi-agent planners that can operate in real-world scenarios.

What's the solution?

GPUDrive addresses this issue with a GPU-accelerated simulator that generates over a million steps of experience per second, letting developers create and test many different driving scenarios quickly. The simulator is built on the Madrona Game Engine; observation, reward, and dynamics functions are written in C++ and lowered to high-performance CUDA, so users can define complex, heterogeneous behaviors for many agents at once. It also supports sensors such as LIDAR to mimic real-world driving conditions. Using GPUDrive, the authors trained reinforcement learning agents on scenes from the Waymo Motion dataset, producing effective goal-reaching agents in minutes for individual scenes and generally capable agents in a few hours.
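To make the batched-stepping idea concrete, here is a toy NumPy sketch of the pattern such a simulator exposes: every agent's dynamics, observation, and reward are computed for the whole batch in one vectorized call per tick. All names here (`BatchedSim`, `step`) are hypothetical illustrations, not the actual GPUDrive API, and the goal-seeking dynamics are a stand-in for real driving physics.

```python
import numpy as np

class BatchedSim:
    """Toy stand-in: each agent moves on a 2D plane toward its own goal."""

    def __init__(self, num_agents: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.pos = rng.uniform(-1.0, 1.0, size=(num_agents, 2))
        self.goal = rng.uniform(-1.0, 1.0, size=(num_agents, 2))

    def step(self, actions: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
        """Advance all agents in lockstep; returns (observations, rewards)."""
        self.pos += 0.1 * actions                    # batched dynamics update
        obs = self.goal - self.pos                   # batched observation
        reward = -np.linalg.norm(obs, axis=1)        # batched reward: -distance
        return obs, reward

# One rollout: a greedy "steer toward the goal" policy over 4096 agents.
sim = BatchedSim(num_agents=4096)
obs, reward = sim.step(np.zeros_like(sim.pos))
for _ in range(50):
    actions = obs / (np.linalg.norm(obs, axis=1, keepdims=True) + 1e-8)
    obs, reward = sim.step(actions)
```

The key point is that no Python-level loop runs per agent; in GPUDrive the analogous per-agent work is compiled C++ running as CUDA kernels, which is what makes millions of steps per second feasible.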

Why it matters?

GPUDrive is significant because it accelerates the development of AI systems for autonomous driving. By enabling fast and efficient training of AI agents in diverse scenarios, it can lead to better self-driving technology and improve safety on the roads. This research helps bridge the gap between simulation and real-world applications in the field of autonomous vehicles.

Abstract

Multi-agent learning algorithms have been successful at generating superhuman planning in a wide variety of games but have had little impact on the design of deployed multi-agent planners. A key bottleneck in applying these techniques to multi-agent planning is that they require billions of steps of experience. To enable the study of multi-agent planning at this scale, we present GPUDrive, a GPU-accelerated, multi-agent simulator built on top of the Madrona Game Engine that can generate over a million steps of experience per second. Observation, reward, and dynamics functions are written directly in C++, allowing users to define complex, heterogeneous agent behaviors that are lowered to high-performance CUDA. We show that using GPUDrive we are able to effectively train reinforcement learning agents over many scenes in the Waymo Motion dataset, yielding highly effective goal-reaching agents in minutes for individual scenes and generally capable agents in a few hours. We ship these trained agents as part of the code base at https://github.com/Emerge-Lab/gpudrive.