S-Chain: Structured Visual Chain-of-Thought For Medicine

Khai Le-Duc, Duy M. H. Nguyen, Phuong T. H. Trinh, Tien-Phat Nguyen, Nghiem T. Diep, An Ngo, Tung Vu, Trinh Vuong, Anh-Tien Nguyen, Mau Nguyen, Van Trung Hoang, Khai-Nguyen Nguyen, Hy Nguyen, Chris Ngo, Anji Liu, Nhat Ho, Anne-Christin Hauschild, Khanh Xuan Nguyen, Thanh Nguyen-Tang, Pengtao Xie, Daniel Sonntag, James Zou

2025-10-29

Summary

This paper introduces a new dataset called S-Chain designed to improve how well artificial intelligence models can reason about medical images and explain their decisions, similar to how a doctor would explain a diagnosis.

What's the problem?

Current medical vision-language models, which answer questions about medical images, often produce correct answers without clearly showing *why* they arrived at them. It is hard to tell whether a model is focusing on the right parts of the image when making its decision, and until now there was no large, high-quality dataset that trains models to link each reasoning step to specific regions of an image. Existing datasets lacked detailed explanations tied to visual evidence.

What's the solution?

The researchers created S-Chain, a dataset of 12,000 expert-annotated medical images. Each annotation includes not only the answer to a question about the image but also step-by-step reasoning and, crucially, bounding boxes that pinpoint exactly which parts of the image support each reasoning step. The dataset covers 16 languages, totaling over 700,000 question-answer pairs. The researchers then benchmarked several existing AI models on the dataset and explored combining it with retrieval-augmented generation, a technique that lets models pull in external medical knowledge to improve their reasoning.
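To make the annotation structure concrete, here is a minimal sketch of what one S-Chain-style record might look like. The class and field names (`ReasoningStep`, `SChainExample`, etc.) and the example values are hypothetical illustrations of the described structure (question, answer, ordered reasoning steps, and bounding boxes per step), not the dataset's actual schema:

```python
from dataclasses import dataclass

@dataclass
class ReasoningStep:
    """One step of a structured visual chain-of-thought (hypothetical schema)."""
    text: str                                # rationale for this step
    boxes: list[tuple[int, int, int, int]]   # (x, y, w, h) image regions supporting it

@dataclass
class SChainExample:
    """A single grounded VQA example (hypothetical schema)."""
    image_path: str
    question: str
    answer: str
    steps: list[ReasoningStep]               # ordered, visually grounded reasoning chain
    language: str = "en"                     # one of the 16 supported languages

# Toy example: each reasoning step points at the image evidence that supports it.
ex = SChainExample(
    image_path="chest_xray_001.png",
    question="Is there evidence of pneumonia?",
    answer="Yes",
    steps=[
        ReasoningStep("An opacity is visible in the right lower lobe.",
                      [(120, 200, 80, 60)]),
        ReasoningStep("The opacity pattern is consistent with consolidation.",
                      [(120, 200, 80, 60)]),
    ],
)
print(len(ex.steps))
```

The key idea the sketch captures is that grounding lives at the step level: every rationale carries its own bounding boxes, so a model's textual reasoning can be checked against visual evidence step by step.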

Why it matters?

This work is important because it pushes the field toward more trustworthy and explainable AI in healthcare. By requiring models to justify their answers with visual evidence, we can better assess whether they are making decisions for the right reasons and, ultimately, build AI systems that doctors can rely on to assist with diagnosis and treatment.

Abstract

Faithful reasoning in medical vision-language models (VLMs) requires not only accurate predictions but also transparent alignment between textual rationales and visual evidence. While Chain-of-Thought (CoT) prompting has shown promise in medical visual question answering (VQA), no large-scale expert-level dataset has captured stepwise reasoning with precise visual grounding. We introduce S-Chain, the first large-scale dataset of 12,000 expert-annotated medical images with bounding boxes and structured visual CoT (SV-CoT), explicitly linking visual regions to reasoning steps. The dataset further supports 16 languages, totaling over 700k VQA pairs for broad multilingual applicability. Using S-Chain, we benchmark state-of-the-art medical VLMs (ExGra-Med, LLaVA-Med) and general-purpose VLMs (Qwen2.5-VL, InternVL2.5), showing that SV-CoT supervision significantly improves interpretability, grounding fidelity, and robustness. Beyond benchmarking, we study its synergy with retrieval-augmented generation, revealing how domain knowledge and visual grounding interact during autoregressive reasoning. Finally, we propose a new mechanism that strengthens the alignment between visual evidence and reasoning, improving both reliability and efficiency. S-Chain establishes a new benchmark for grounded medical reasoning and paves the way toward more trustworthy and explainable medical VLMs.