Curia: A Multi-Modal Foundation Model for Radiology
Corentin Dancette, Julien Khlaut, Antoine Saporta, Helene Philippe, Elodie Ferreres, Baptiste Callard, Théo Danielou, Léo Alberge, Léo Machado, Daniel Tordjman, Julie Dupuis, Korentin Le Floch, Jean Du Terrail, Mariam Moshiri, Laurent Dercle, Tom Boeken, Jules Gregory, Maxime Ronot, François Legou, Pascal Roux, Marc Sapoval, Pierre Manceron
2025-09-10

Summary
This paper introduces Curia, a new artificial intelligence model designed to help doctors read medical images such as CT and MRI scans. It's a big step towards AI that can handle a wide variety of imaging tasks, rather than being limited to a single narrow one.
What's the problem?
Currently, most AI tools for analyzing medical images are very specialized. They're good at finding one specific problem, like a broken bone, but can't easily switch to identifying something else, like a tumor. This is a problem because hospitals use many different types of scans to diagnose many different conditions, and it's impractical to have a separate AI for each one. Also, training these AIs usually requires a huge amount of labeled data, which can be hard to get.
What's the solution?
The researchers created Curia by training a single AI model on a massive collection of over 150,000 real-world imaging exams from a major hospital, totaling 130 terabytes of data. This approach, based on what are called 'foundation models,' allows the AI to learn general features of medical images and then adapt to specific tasks. They then tested Curia on a benchmark of 19 different tasks, like identifying organs, detecting diseases, and predicting how far cancer has spread, and made the model publicly available for others to use.
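To make the "learn general features, then adapt" idea concrete, here is a minimal sketch of the standard linear-probe recipe often used to adapt a frozen foundation model to a new task with little labeled data. Everything in it is illustrative: the embeddings, feature dimension, and labels are random placeholders, not Curia's actual data or API.

```python
# Illustrative linear probe: adapt a frozen foundation model to a new task
# by training only a small classifier on its embeddings. All names, shapes,
# and data here are placeholders, not Curia's actual interface.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in for embeddings produced by a frozen pretrained encoder:
# 200 scans, each mapped to a 768-dimensional feature vector.
features = rng.normal(size=(200, 768))
labels = rng.integers(0, 2, size=200)  # e.g. finding present / absent

# Only this lightweight classifier is trained; the encoder stays frozen,
# which is why a handful of labeled examples can be enough.
probe = LogisticRegression(max_iter=1000).fit(features[:150], labels[:150])
print("held-out accuracy:", probe.score(features[150:], labels[150:]))
```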
Why it matters?
Curia is important because it performs as well as, or even better than, human radiologists on many tasks. It also shows that foundation models can work well in radiology, even when there isn't a lot of labeled data available for a specific condition. This could lead to faster and more accurate diagnoses, and ultimately improve patient care. By releasing the model, the researchers hope to encourage further development in this field.
Abstract
AI-assisted radiological interpretation is based predominantly on narrow, single-task models. This approach is impractical for covering the vast spectrum of imaging modalities, diseases, and radiological findings. Foundation models (FMs) hold the promise of broad generalization across modalities and in low-data settings. However, this potential has remained largely unrealized in radiology. We introduce Curia, a foundation model trained on the entire cross-sectional imaging output of a major hospital over several years, which to our knowledge is the largest such corpus of real-world data, encompassing 150,000 exams (130 TB). On a newly curated 19-task external validation benchmark, Curia accurately identifies organs, detects conditions like brain hemorrhages and myocardial infarctions, and predicts outcomes in tumor staging. Curia meets or surpasses the performance of radiologists and recent foundation models, and exhibits clinically significant emergent properties in cross-modality and low-data regimes. To accelerate progress, we release our base model's weights at https://huggingface.co/raidium/curia.
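Since the released weights live on the Hugging Face Hub, they can be fetched programmatically. A minimal sketch using the huggingface_hub library; only the repository id raidium/curia comes from the paper, and how the downloaded checkpoint is structured and loaded is not specified here.

```python
# Download the released Curia weights from the Hugging Face Hub.
# Only the repository id "raidium/curia" is taken from the paper; the layout
# of the checkpoint inside the repo is an assumption left to the reader.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="raidium/curia")
print(f"Curia weights downloaded to: {local_dir}")
```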