ClinicalBench: Can LLMs Beat Traditional ML Models in Clinical Prediction?
Canyu Chen, Jian Yu, Shan Chen, Che Liu, Zhongwei Wan, Danielle Bitterman, Fei Wang, Kai Shu
2024-11-15

Summary
This paper explores whether large language models (LLMs) can outperform traditional machine learning (ML) models in predicting clinical outcomes, using a new benchmark called ClinicalBench.
What's the problem?
While LLMs have shown great potential in processing medical text and performing well on medical licensing exams, traditional ML models such as SVMs and XGBoost remain the standard tools for clinical prediction. The open question is whether LLMs can match or exceed the performance of these traditional models in real-world clinical settings.
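To make the comparison concrete, here is a minimal sketch of the kind of traditional ML baseline in question: a gradient-boosting classifier trained on tabular patient features for a binary outcome such as in-hospital mortality. Everything below (the synthetic data, feature count, and label rule) is a hypothetical placeholder rather than the paper's actual setup, and scikit-learn's GradientBoostingClassifier stands in for XGBoost to keep dependencies minimal.

```python
# Hedged sketch: a gradient-boosting baseline for a binary clinical
# outcome (e.g., in-hospital mortality). All data here is synthetic;
# a real pipeline would use curated patient features from an EHR database.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(2000, 10))  # hypothetical vitals/labs features
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=2000) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print("Macro-F1:", f1_score(y_te, clf.predict(X_te), average="macro"))
```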
What's the solution?
The researchers created ClinicalBench, a benchmark spanning three common clinical prediction tasks, two databases, 14 general-purpose LLMs, 8 medical LLMs, and 11 traditional ML models. They ran extensive head-to-head comparisons of the two groups' predictive abilities. Despite recent advances, the LLMs, even at different model scales and with diverse prompting or fine-tuning strategies, still did not outperform the traditional ML models in clinical prediction, suggesting a deficiency in clinical reasoning and decision-making.
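For the LLM side of such a comparison, the essential pattern is to phrase each patient record as a prompt, parse the model's answer into a label, and score it with the same metric as the ML baselines. A rough sketch follows; the prompt wording and the `query_llm` stub are hypothetical illustrations, not ClinicalBench's actual prompts or evaluation harness.

```python
# Hedged sketch: evaluating an LLM as a clinical classifier by parsing
# a yes/no answer into a binary label. `query_llm` is a hypothetical
# stub for whatever chat-completion client is available.
from sklearn.metrics import f1_score

def query_llm(prompt: str) -> str:
    raise NotImplementedError("plug in an actual LLM API call here")

def predict_mortality(patient_note: str) -> int:
    prompt = (
        "Read the patient record below and answer with exactly one word, "
        "yes or no: will this patient die during the hospital stay?\n\n"
        f"{patient_note}"
    )
    answer = query_llm(prompt).strip().lower()
    return 1 if answer.startswith("yes") else 0

# Usage (test_notes and test_labels are placeholders):
# preds = [predict_mortality(n) for n in test_notes]
# print("Macro-F1:", f1_score(test_labels, preds, average="macro"))
```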
Why it matters?
This study is significant because it highlights the limitations of LLMs in a critical field like healthcare. It calls for caution when integrating LLMs into clinical applications, emphasizing the need for further development to enhance their reasoning capabilities. By establishing ClinicalBench, the researchers provide a valuable tool for future studies aimed at improving the performance of LLMs in healthcare.
Abstract
Large Language Models (LLMs) hold great promise for revolutionizing current clinical systems, given their superior capacities on medical text processing tasks and medical licensing exams. Meanwhile, traditional ML models such as SVM and XGBoost are still mainly adopted in clinical prediction tasks. An emerging question is: can LLMs beat traditional ML models in clinical prediction? Thus, we build a new benchmark, ClinicalBench, to comprehensively study the clinical predictive modeling capacities of both general-purpose and medical LLMs, and to compare them with traditional ML models. ClinicalBench covers three common clinical prediction tasks, two databases, 14 general-purpose LLMs, 8 medical LLMs, and 11 traditional ML models. Through extensive empirical investigation, we find that both general-purpose and medical LLMs, even at different model scales and with diverse prompting or fine-tuning strategies, still cannot beat traditional ML models in clinical prediction, shedding light on their potential deficiency in clinical reasoning and decision-making. We call for caution when practitioners adopt LLMs in clinical applications. ClinicalBench can be used to bridge the gap between LLM development for healthcare and real-world clinical practice.