Does DINOv3 Set a New Medical Vision Standard?
Che Liu, Yinda Chen, Haoyuan Shi, Jinpeng Lu, Bailiang Jian, Jiazhen Pan, Linghan Cai, Jiayi Wang, Yundi Zhang, Jun Li, Cosmin I. Bercea, Cheng Ouyang, Chen Chen, Zhiwei Xiong, Benedikt Wiestler, Christian Wachinger, Daniel Rueckert, Wenjia Bai, Rossella Arcucci
2025-09-09
Summary
This paper investigates whether a powerful image recognition model, DINOv3, trained on everyday pictures, can be used directly for medical image analysis without needing extra training specifically on medical data.
What's the problem?
Typically, computer vision models must be trained from scratch or fine-tuned on massive datasets of medical images to perform well at tasks like identifying diseases in scans, which is expensive and time-consuming. The question is whether a model that is already good at understanding general images can be used directly for medical imaging without this specialized training, and if so, how well it performs compared to models built specifically for medical use.
What's the solution?
The researchers tested DINOv3 on a range of common medical imaging tasks, including classifying 2D and 3D images (like X-rays and CT scans) and segmenting images to outline specific regions of interest. They varied the size of the DINOv3 model and the input image resolution to see how performance changed, then compared DINOv3's results against those of models already trained on medical images. A sketch of one plausible evaluation setup follows below.
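A common way such benchmarks are run, and a plausible reading of "used directly," is to freeze the encoder and train only a lightweight head on its features. The minimal PyTorch sketch below illustrates that setup for 2D multi-label classification. The torch.hub entry-point name (`dinov3_vitb16`), the 14-class label space, and the assumption that the forward pass returns a single global feature vector are illustrative guesses, not the paper's confirmed protocol; check the facebookresearch/dinov3 repository for released model names and loading requirements.

```python
import torch
import torch.nn as nn

# Assumption: DINOv3 is loadable via torch.hub from facebookresearch/dinov3.
# The entry-point name "dinov3_vitb16" is illustrative, and the official
# checkpoints may require an explicit weights=<path> argument (gated download).
backbone = torch.hub.load("facebookresearch/dinov3", "dinov3_vitb16")
backbone.eval()
for p in backbone.parameters():
    p.requires_grad = False  # frozen encoder: only the probe is trained

embed_dim = 768    # ViT-B embedding width
num_classes = 14   # e.g. multi-label chest X-ray findings (illustrative)
probe = nn.Linear(embed_dim, num_classes)

optimizer = torch.optim.AdamW(probe.parameters(), lr=1e-3)
criterion = nn.BCEWithLogitsLoss()  # multi-label classification

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One linear-probe update on a batch of (B, 3, H, W) images.

    Grayscale scans (X-ray, CT slices) are assumed to be replicated to
    3 channels and resized so H and W are multiples of the patch size.
    """
    with torch.no_grad():
        # Assumption: as in DINOv2, the default forward returns the global
        # (CLS-token) feature of shape (B, embed_dim).
        feats = backbone(images)
    logits = probe(feats)
    loss = criterion(logits, labels.float())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Under this setup, scaling the model amounts to swapping the hub entry point (e.g. to a ViT-L variant), and scaling resolution amounts to changing H and W, which changes the number of patch tokens while leaving the probe untouched.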
Why it matters?
The findings show DINOv3 performs surprisingly well, even beating some medical-specific models on certain tasks. This suggests that models trained on general images can serve as a strong starting point for medical image analysis, potentially saving time and resources. However, the study also found clear limitations: DINOv3's features degrade on highly specialized modalities such as pathology slides, electron microscopy, and PET scans, and simply making the model bigger or its features finer does not reliably improve results in the medical domain. This work provides a new benchmark for future research and suggests ways to build on these models for medical applications, such as using DINOv3's features to enforce consistency across views in 3D reconstruction.
Abstract
The advent of large-scale vision foundation models, pre-trained on diverse natural images, has marked a paradigm shift in computer vision. However, how the efficacy of frontier vision foundation models transfers to specialized domains such as medical imaging remains an open question. This report investigates whether DINOv3, a state-of-the-art self-supervised vision transformer (ViT) with strong capabilities in dense prediction tasks, can directly serve as a powerful, unified encoder for medical vision tasks without domain-specific pre-training. To answer this, we benchmark DINOv3 across common medical vision tasks, including 2D/3D classification and segmentation, on a wide range of medical imaging modalities. We systematically analyze its scalability by varying model sizes and input image resolutions. Our findings reveal that DINOv3 shows impressive performance and establishes a formidable new baseline. Remarkably, it can even outperform medical-specific foundation models such as BiomedCLIP and CT-Net on several tasks, despite being trained solely on natural images. However, we identify clear limitations: the model's features degrade in scenarios requiring deep domain specialization, such as whole-slide pathology images (WSIs), electron microscopy (EM), and positron emission tomography (PET). Furthermore, we observe that DINOv3 does not consistently obey scaling laws in the medical domain: performance does not reliably increase with larger models or finer feature resolutions, showing diverse scaling behaviors across tasks. Ultimately, our work establishes DINOv3 as a strong baseline whose powerful visual features can serve as a robust prior for multiple complex medical tasks. This opens promising future directions, such as leveraging its features to enforce multi-view consistency in 3D reconstruction.
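The dense-prediction claim in the abstract can be made concrete in the same frozen-encoder spirit: the patch tokens form a low-resolution feature map that a small head can decode into a segmentation. The sketch below assumes DINOv3 retains DINOv2's `get_intermediate_layers(..., reshape=True)` API, which this summary does not confirm; treat it as an illustration of the idea rather than the paper's method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Assumptions: the hub entry point and the get_intermediate_layers API
# (present in DINOv2) carry over to DINOv3; verify against the official repo.
backbone = torch.hub.load("facebookresearch/dinov3", "dinov3_vitb16").eval()
for p in backbone.parameters():
    p.requires_grad = False

embed_dim, patch = 768, 16
num_classes = 4  # e.g. background + 3 organ labels (illustrative)
head = nn.Conv2d(embed_dim, num_classes, kernel_size=1)  # linear per-patch probe

def segment(images: torch.Tensor) -> torch.Tensor:
    """Dense prediction from frozen patch tokens.

    images: (B, 3, H, W) with H and W divisible by the patch size.
    Returns per-pixel logits of shape (B, num_classes, H, W).
    """
    with torch.no_grad():
        # reshape=True yields a (B, embed_dim, H/patch, W/patch) feature map
        feats = backbone.get_intermediate_layers(images, n=1, reshape=True)[0]
    logits = head(feats)                                  # (B, K, H/16, W/16)
    return F.interpolate(logits, size=images.shape[-2:],  # upsample to input
                         mode="bilinear", align_corners=False)
```

Read this way, "finer feature resolutions" in the abstract corresponds to feeding larger inputs: doubling H and W quadruples the patch-token grid while the backbone and head stay fixed.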