CATS-V2V: A Real-World Vehicle-to-Vehicle Cooperative Perception Dataset with Complex Adverse Traffic Scenarios
Hangyu Li, Bofeng Cao, Zhaohui Liang, Wuzhen Li, Juyoung Oh, Yuxuan Chen, Shixiao Liang, Hang Zhou, Chengyuan Ma, Jiaxi Liu, Zheng Li, Peng Zhang, KeKe Long, Maolin Liu, Jackson Jiang, Chunlei Yu, Shengxiang Liu, Hongkai Yu, Xiaopeng Li
2025-11-17
Summary
This paper introduces a new dataset called CATS-V2V, designed to help self-driving cars handle difficult and unusual traffic situations.
What's the problem?
Self-driving cars rely on data to 'learn' how to drive, but most existing datasets only show normal driving conditions. This limits how well cars can handle challenging situations like bad weather, poor lighting, or complex traffic patterns. Cooperative perception, where nearby cars share what their sensors see, could help, but until now there has been no good real-world dataset for developing and testing these systems in tough scenarios.
What's the solution?
The researchers created CATS-V2V, a dataset collected by two hardware time-synchronized vehicles driving across 10 different locations and 10 different weather and lighting conditions. Its 100 driving clips contain roughly 60,000 LiDAR scans, 1.26 million camera images, and 750,000 high-precision GNSS/IMU location records. The researchers also carefully labeled objects in the scenes with 3D bounding boxes and aligned all the data in time, so every sensor reading and annotation matches up, making the dataset easier for researchers to use. It is the largest and most detailed dataset of its kind.
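To make the two-vehicle, multi-sensor setup concrete, here is a minimal sketch of how one time-synchronized sample from a dataset like this might be grouped in code. The field names and file layout are illustrative assumptions for intuition only, not the dataset's actual format.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class VehicleFrame:
    """Hypothetical per-vehicle record at one synchronized timestamp."""
    timestamp: float        # shared hardware-synchronized clock, in seconds
    lidar_path: str         # one 10 Hz LiDAR sweep (point cloud file)
    image_paths: List[str]  # multi-view 30 Hz camera images nearest this sweep
    gnss_pose: List[float]  # RTK-fixed pose: x, y, z, roll, pitch, yaw

@dataclass
class CooperativeSample:
    """Hypothetical pairing of the two vehicles plus shared annotations."""
    ego: VehicleFrame       # data from the ego vehicle
    coop: VehicleFrame      # data from the cooperating vehicle
    boxes_path: str         # time-consistent 3D bounding box annotations
```

Grouping both vehicles under one sample reflects the basic requirement of V2V cooperative perception: every frame must carry data from both agents on a common clock and in a common world frame.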
Why it matters?
CATS-V2V gives the self-driving car community a valuable resource for developing and testing technologies that can handle complex and dangerous driving situations. With a dataset focused on these challenging scenarios, researchers can build safer and more reliable autonomous vehicles.
Abstract
Vehicle-to-Vehicle (V2V) cooperative perception has great potential to enhance autonomous driving performance by overcoming perception limitations in complex adverse traffic scenarios (CATS). Meanwhile, data serves as the fundamental infrastructure for modern autonomous driving AI. However, due to stringent data collection requirements, existing datasets focus primarily on ordinary traffic scenarios, constraining the benefits of cooperative perception. To address this challenge, we introduce CATS-V2V, the first-of-its-kind real-world dataset for V2V cooperative perception under complex adverse traffic scenarios. The dataset was collected by two hardware time-synchronized vehicles, covering 10 weather and lighting conditions across 10 diverse locations. The 100-clip dataset includes 60K frames of 10 Hz LiDAR point clouds and 1.26M multi-view 30 Hz camera images, along with 750K anonymized yet high-precision RTK-fixed GNSS and IMU records. Correspondingly, we provide time-consistent 3D bounding box annotations for objects, as well as static scenes to construct a 4D BEV representation. On this basis, we propose a target-based temporal alignment method, ensuring that all objects are precisely aligned across all sensor modalities. We hope that CATS-V2V, the largest-scale, most supportive, and highest-quality dataset of its kind to date, will benefit the autonomous driving community in related tasks.
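As a rough illustration of what temporal alignment across sensor rates involves, the sketch below pairs 30 Hz camera frames with 10 Hz LiDAR sweeps by nearest timestamp on a shared hardware-synchronized clock. This is a generic baseline for intuition only; the paper's target-based method goes beyond timestamp matching by aligning the observed objects themselves across all sensor modalities.

```python
import numpy as np

def nearest_timestamp_pairs(lidar_ts: np.ndarray, camera_ts: np.ndarray) -> np.ndarray:
    """For each LiDAR timestamp, return the index of the closest camera timestamp.

    Assumes both arrays are sorted and expressed on the same synchronized clock.
    """
    # Insertion points of LiDAR times into the camera timeline.
    idx = np.searchsorted(camera_ts, lidar_ts)
    idx = np.clip(idx, 1, len(camera_ts) - 1)
    # Compare the neighbor on each side and keep the closer one.
    left, right = camera_ts[idx - 1], camera_ts[idx]
    choose_left = (lidar_ts - left) < (right - lidar_ts)
    return np.where(choose_left, idx - 1, idx)

# Example: 10 Hz LiDAR vs. 30 Hz cameras over one second.
lidar_ts = np.arange(0.0, 1.0, 0.1)      # 10 sweeps per second
camera_ts = np.arange(0.0, 1.0, 1 / 30)  # 30 frames per second
print(nearest_timestamp_pairs(lidar_ts, camera_ts))
```

Nearest-timestamp matching ignores object motion between exposures, which is exactly the gap a target-based alignment is meant to close for fast-moving traffic participants.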