MedAgentsBench: Benchmarking Thinking Models and Agent Frameworks for Complex Medical Reasoning
Xiangru Tang, Daniel Shao, Jiwoong Sohn, Jiapeng Chen, Jiayi Zhang, Jinyu Xiang, Fang Wu, Yilun Zhao, Chenglin Wu, Wenqi Shi, Arman Cohan, Mark Gerstein
2025-03-11
Summary
This paper introduces MedAgentsBench, a tough benchmark for AI that focuses on tricky medical questions needing multi-step thinking, like diagnosing rare conditions or planning treatments, areas where current AI still struggles.
What's the problem?
Existing medical AI tests are too easy, use inconsistent methods, and don’t consider how well models balance accuracy with speed and cost, making it hard to know which AI is truly helpful for real doctors.
What's the solution?
MedAgentsBench draws hard questions from seven medical datasets, filters them to keep only the toughest cases, and tests how well AI models reason while tracking their speed and resource use.
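The filtering idea can be sketched as keeping only the questions that baseline models usually get wrong. The function, data layout, and 0.5 threshold below are illustrative assumptions, not the paper's exact protocol:

```python
# Hypothetical sketch of difficulty filtering: keep questions whose
# average base-model accuracy falls below a threshold. The data
# structures and threshold are assumptions for illustration only.

def filter_hard_questions(questions, model_results, max_accuracy=0.5):
    """Return the question IDs that most base models answered wrong.

    questions: list of question IDs
    model_results: dict mapping model name -> {question ID: bool (correct?)}
    """
    hard = []
    for qid in questions:
        outcomes = [results[qid] for results in model_results.values()]
        accuracy = sum(outcomes) / len(outcomes)
        if accuracy < max_accuracy:  # too few base models solved it
            hard.append(qid)
    return hard

# Toy example: three questions scored by two base models.
questions = ["q1", "q2", "q3"]
model_results = {
    "model_a": {"q1": True, "q2": False, "q3": True},
    "q_model_b_placeholder": {"q1": True, "q2": False, "q3": False},
}
print(filter_hard_questions(questions, model_results))  # -> ['q2']
```

Raising `max_accuracy` keeps borderline questions (like `q3`, solved by half the models), so the threshold directly controls how hard the retained subset is.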
Why does it matter?
This helps identify which AI models are best for real-world medical tasks, like assisting doctors with complex cases, while keeping costs and response times practical for hospitals.
Abstract
Large Language Models (LLMs) have shown impressive performance on existing medical question-answering benchmarks. This high performance makes it increasingly difficult to meaningfully evaluate and differentiate advanced methods. We present MedAgentsBench, a benchmark that focuses on challenging medical questions requiring multi-step clinical reasoning, diagnosis formulation, and treatment planning: scenarios where current models still struggle despite their strong performance on standard tests. Drawing from seven established medical datasets, our benchmark addresses three key limitations in existing evaluations: (1) the prevalence of straightforward questions where even base models achieve high performance, (2) inconsistent sampling and evaluation protocols across studies, and (3) lack of systematic analysis of the interplay between performance, cost, and inference time. Through experiments with various base models and reasoning methods, we demonstrate that the latest thinking models, DeepSeek R1 and OpenAI o3, exhibit exceptional performance in complex medical reasoning tasks. Additionally, advanced search-based agent methods offer promising performance-to-cost ratios compared to traditional approaches. Our analysis reveals substantial performance gaps between model families on complex questions and identifies optimal model selections for different computational constraints. Our benchmark and evaluation framework are publicly available at https://github.com/gersteinlab/medagents-benchmark.
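The abstract's cost-aware model selection can be sketched as picking the most accurate option that fits a per-question budget. All names and numbers below are hypothetical placeholders, not results from the paper:

```python
# Illustrative sketch (not the paper's code): choose the best model
# under a cost budget, mirroring the benchmark's analysis of the
# performance/cost trade-off. All figures are made-up placeholders.

def best_model_under_budget(models, max_cost):
    """models: list of dicts with 'name', 'accuracy', 'cost_per_question'.
    Returns the name of the most accurate affordable model, or None."""
    affordable = [m for m in models if m["cost_per_question"] <= max_cost]
    if not affordable:
        return None
    return max(affordable, key=lambda m: m["accuracy"])["name"]

# Toy numbers, purely hypothetical:
models = [
    {"name": "small-base", "accuracy": 0.42, "cost_per_question": 0.001},
    {"name": "search-agent", "accuracy": 0.61, "cost_per_question": 0.02},
    {"name": "thinking-model", "accuracy": 0.70, "cost_per_question": 0.15},
]
print(best_model_under_budget(models, max_cost=0.05))  # -> search-agent
```

Under a tight budget the cheaper search-based agent wins on performance-to-cost, while a generous budget selects the thinking model outright, which matches the trade-off the abstract describes.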