MEDIC: Towards a Comprehensive Framework for Evaluating LLMs in Clinical Applications
Praveen K Kanithi, Clément Christophe, Marco AF Pimentel, Tathagata Raha, Nada Saadi, Hamza Javed, Svetlana Maslenkova, Nasir Hayat, Ronnie Rajan, Shadab Khan
2024-09-12

Summary
This paper introduces MEDIC, a framework designed to evaluate large language models (LLMs) specifically for use in clinical applications, going beyond traditional benchmark-based testing.
What's the problem?
As LLMs are increasingly used in healthcare, there is a need for evaluation methods that reflect their real-world performance. Current benchmarks, such as those based on the USMLE, may not accurately predict how well these models will perform in practice, while slower real-world assessments risk being outdated by the time the models they studied are deployed.
What's the solution?
The authors introduce MEDIC, which evaluates LLMs across five key dimensions: medical reasoning, ethics and bias, data and language understanding, in-context learning, and clinical safety. MEDIC also includes a novel cross-examination framework that quantifies qualities such as coverage and hallucination without needing reference outputs (a minimal sketch of the idea appears below). Together, these enable a comprehensive assessment of how well models handle tasks like answering medical questions, summarizing clinical information, and generating clinical notes.
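To make the reference-free idea concrete, here is a minimal Python sketch of a cross-examination-style check, loosely following the paper's description. It illustrates the general technique rather than the authors' implementation: the `llm` callable, the prompts, and the exact coverage/hallucination formulas are assumptions introduced for this example.

```python
# Minimal sketch (not the paper's exact method) of reference-free
# cross-examination: questions are generated from the source document and
# from the model's output, and answerability in the opposite text is used
# to estimate coverage and hallucination. `llm` is a hypothetical
# prompt-in/completion-out callable supplied by the caller.
from typing import Callable, List

Llm = Callable[[str], str]  # prompt in, completion out

def generate_questions(llm: Llm, text: str, n: int = 5) -> List[str]:
    """Ask the examiner model for factual questions answerable from `text`."""
    prompt = (
        f"Write {n} short factual questions that can be answered "
        f"using only the text below. One question per line.\n\n{text}"
    )
    return [q.strip() for q in llm(prompt).splitlines() if q.strip()]

def answerable(llm: Llm, question: str, context: str) -> bool:
    """Check whether `context` contains enough information to answer `question`."""
    prompt = (
        "Answer strictly yes or no: does the following text contain enough "
        f"information to answer the question?\n\n"
        f"Question: {question}\n\nText: {context}"
    )
    return llm(prompt).strip().lower().startswith("yes")

def cross_examine(llm: Llm, source: str, output: str) -> dict:
    # Coverage: questions drawn from the source should be answerable
    # from the model's output if the output retained the source content.
    src_qs = generate_questions(llm, source)
    coverage = sum(answerable(llm, q, output) for q in src_qs) / max(len(src_qs), 1)

    # Hallucination: questions drawn from the output that the source
    # cannot answer point at content the model introduced on its own.
    out_qs = generate_questions(llm, output)
    grounded = sum(answerable(llm, q, source) for q in out_qs) / max(len(out_qs), 1)

    return {"coverage": coverage, "hallucination": 1.0 - grounded}
```

The design point worth noting is that no human-written reference output is required: the source document itself anchors both directions of questioning, which is what lets this style of evaluation keep pace with newly released models.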
Why it matters?
This research is important because it helps ensure that the best language models are selected for healthcare applications. By providing a thorough evaluation method, MEDIC can bridge the gap between theoretical performance and practical application, ultimately improving patient care and safety in medical settings.
Abstract
The rapid development of Large Language Models (LLMs) for healthcare applications has spurred calls for holistic evaluation beyond frequently-cited benchmarks like USMLE, to better reflect real-world performance. While real-world assessments are valuable indicators of utility, they often lag behind the pace of LLM evolution, likely rendering findings obsolete upon deployment. This temporal disconnect necessitates a comprehensive upfront evaluation that can guide model selection for specific clinical applications. We introduce MEDIC, a framework assessing LLMs across five critical dimensions of clinical competence: medical reasoning, ethics and bias, data and language understanding, in-context learning, and clinical safety. MEDIC features a novel cross-examination framework quantifying LLM performance across areas like coverage and hallucination detection, without requiring reference outputs. We apply MEDIC to evaluate LLMs on medical question-answering, safety, summarization, note generation, and other tasks. Our results show performance disparities across model sizes, baseline vs. medically fine-tuned models, and have implications for model selection for applications requiring specific model strengths, such as low hallucination or lower cost of inference. MEDIC's multifaceted evaluation reveals these performance trade-offs, bridging the gap between theoretical capabilities and practical implementation in healthcare settings, ensuring that the most promising models are identified and adapted for diverse healthcare applications.