MedVista3D: Vision-Language Modeling for Reducing Diagnostic Errors in 3D CT Disease Detection, Understanding and Reporting

Yuheng Li, Yenho Chen, Yuxiang Lai, Jike Zhong, Vanessa Wildman, Xiaofeng Yang

2025-09-08

Summary

This paper introduces MedVista3D, a new computer system designed to help doctors better analyze 3D CT scans, which are a type of medical imaging. It aims to improve how these scans are read and understood, ultimately leading to more accurate diagnoses.

What's the problem?

Reading 3D CT scans is really hard! Doctors often miss small details, struggle to see the big picture across hundreds of images, and can be confused by inconsistent wording in reports. Current computer programs aren't good at handling all of these issues at once – they either focus on finding small localized problems or on understanding the overall scan, but not both, and they struggle with the messy, inconsistent way radiology reports are actually written.

What's the solution?

The researchers created MedVista3D, which looks at CT scans at multiple scales – focusing on small local regions while also considering the whole scan at once. It also uses a special technique to understand the meaning of medical reports, even when the wording varies between doctors. Essentially, it teaches the computer to connect what it 'sees' in the scan with how a doctor would describe it in words, making it better at both finding specific problems and understanding the overall health picture.
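The local-plus-global alignment idea can be sketched as a pair of contrastive (InfoNCE-style) image-text losses: one matching local patch embeddings to report sentences, one matching whole-volume embeddings to full reports. The function names, temperature, and weighting below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def info_nce(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings.
    Matched pairs sit on the diagonal of the similarity matrix."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature  # (B, B) cosine similarities

    def xent(lg):
        # Cross-entropy with the diagonal (true pair) as the target class.
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -np.diag(logp).mean()

    # Average the image-to-text and text-to-image directions.
    return 0.5 * (xent(logits) + xent(logits.T))

def local_global_loss(patch_emb, sent_emb, vol_emb, rep_emb, w_local=0.5):
    """Combine local (patch vs. sentence) and global (volume vs. report)
    alignment into one training objective; w_local is a made-up weight."""
    return (w_local * info_nce(patch_emb, sent_emb)
            + (1.0 - w_local) * info_nce(vol_emb, rep_emb))
```

In this kind of scheme, the local term pushes the model to notice small abnormalities, while the global term forces the same encoder to stay consistent with a volume-level reading of the scan.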

Why it matters?

This is important because it could significantly reduce errors in medical imaging. By helping doctors catch more problems and understand scans more completely, MedVista3D has the potential to improve patient care and lead to earlier, more accurate diagnoses for a variety of diseases. The system also shows promise in predicting how a patient's condition might change over time.

Abstract

Radiologic diagnostic errors (under-reading errors, inattentional blindness, and communication failures) remain prevalent in clinical practice. These issues often stem from missed localized abnormalities, limited global context, and variability in report language. These challenges are amplified in 3D imaging, where clinicians must examine hundreds of slices per scan. Addressing them requires systems with precise localized detection, global volume-level reasoning, and semantically consistent natural language reporting. However, existing 3D vision-language models are unable to meet all three needs jointly, lacking local-global understanding for spatial reasoning and struggling with the variability and noise of uncurated radiology reports. We present MedVista3D, a multi-scale semantic-enriched vision-language pretraining framework for 3D CT analysis. To enable joint disease detection and holistic interpretation, MedVista3D performs local and global image-text alignment for fine-grained representation learning within full-volume context. To address report variability, we apply language model rewrites and introduce a Radiology Semantic Matching Bank for semantics-aware alignment. MedVista3D achieves state-of-the-art performance on zero-shot disease classification, report retrieval, and medical visual question answering, while transferring well to organ segmentation and prognosis prediction. Code and datasets will be released.
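The "Radiology Semantic Matching Bank" for semantics-aware alignment could plausibly be realized as a memory bank of report embeddings that turns exact-match contrastive targets into soft targets over semantically similar reports, so two differently worded reports with the same meaning are not treated as hard negatives. The class below is a hypothetical sketch under that assumption; the name, bank size, and temperature are illustrative, not the paper's specification:

```python
import numpy as np

class SemanticMatchingBank:
    """Hypothetical sketch: a bounded FIFO bank of report embeddings used
    to build soft alignment targets by text-to-text semantic similarity."""

    def __init__(self, dim, size=512):
        self.bank = np.zeros((0, dim))  # stored (normalized) report embeddings
        self.size = size

    def update(self, txt_emb):
        """Append new report embeddings, keeping only the newest `size`."""
        txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
        self.bank = np.vstack([self.bank, txt])[-self.size:]

    def soft_targets(self, txt_emb, temperature=0.1):
        """Softmax distribution over bank entries for each query report;
        semantically close reports receive higher target weight."""
        txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
        sims = txt @ self.bank.T / temperature
        sims = sims - sims.max(axis=1, keepdims=True)
        p = np.exp(sims)
        return p / p.sum(axis=1, keepdims=True)
```

Training would then minimize a cross-entropy between the image-to-bank similarity distribution and these soft text-derived targets, rather than insisting that each scan match only its own verbatim report.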