MedVLSynther: Synthesizing High-Quality Visual Question Answering from Medical Documents with Generator-Verifier LMMs

Xiaoke Huang, Ningsen Wang, Hui Liu, Xianfeng Tang, Yuyin Zhou

2025-10-31

Summary

This paper introduces a new way to create a large dataset of medical questions that combine images and text, designed to help AI models get better at medical visual question answering (VQA).

What's the problem?

Training AI to answer medical questions using both images (like X-rays) and text is really hard because there aren't many large, publicly available, and reliable datasets for this purpose. Building these datasets manually is expensive and time-consuming, and existing datasets might not be high quality or easily accessible.

What's the solution?

The researchers developed a system called MedVLSynther that automatically generates medical VQA questions from existing biomedical research papers. It uses a two-step process: a generator first drafts candidate questions and answer options from a paper's figures and captions, and a verifier then rigorously checks each item for accuracy, consistency with the image, and clinical correctness, discarding anything that fails. Applied to PubMed Central, this process yields a dataset called MedSynVQA: 13,087 verified questions over 14,803 images spanning 13 imaging modalities and 28 anatomical regions. They then used this dataset to further train AI models with reinforcement learning, rewarding the model whenever it answers a verified question correctly.

Why it matters?

This work is important because it provides a scalable and reliable way to create medical VQA training data without relying on manual labeling or potentially sensitive patient information. By using openly available research papers and open-source AI models, the approach is auditable, reproducible, and protects patient privacy, ultimately leading to more accurate and trustworthy AI systems for medical diagnosis and treatment.

Abstract

Large Multimodal Models (LMMs) are increasingly capable of answering medical questions that require joint reasoning over images and text, yet training general medical VQA systems is impeded by the lack of large, openly usable, high-quality corpora. We present MedVLSynther, a rubric-guided generator-verifier framework that synthesizes high-quality multiple-choice VQA items directly from open biomedical literature by conditioning on figures, captions, and in-text references. The generator produces self-contained stems and parallel, mutually exclusive options under a machine-checkable JSON schema; a multi-stage verifier enforces essential gates (self-containment, single correct answer, clinical validity, image-text consistency), awards fine-grained positive points, and penalizes common failure modes before acceptance. Applying this pipeline to PubMed Central yields MedSynVQA: 13,087 audited questions over 14,803 images spanning 13 imaging modalities and 28 anatomical regions. Training open-weight LMMs with reinforcement learning using verifiable rewards improves accuracy across six medical VQA benchmarks, achieving averages of 55.85 (3B) and 58.15 (7B), with up to 77.57 on VQA-RAD and 67.76 on PathVQA, outperforming strong medical LMMs. Ablations verify that both generation and verification are necessary and that more verified data consistently helps, and a targeted contamination analysis detects no leakage from evaluation suites. By operating entirely on open literature and open-weight models, MedVLSynther offers an auditable, reproducible, and privacy-preserving path to scalable medical VQA training data.