
Fine-tuning Large Language Models with Human-inspired Learning Strategies in Medical Question Answering

Yushi Yang, Andrew M. Bean, Robert McCraith, Adam Mahdi

2024-08-19


Summary

This paper discusses how to improve the training of large language models (LLMs) for answering medical questions by using human-inspired learning strategies.

What's the problem?

Training large language models is expensive and requires large amounts of data. Traditional training methods can be inefficient, so better ways of ordering and selecting training data are needed to improve performance without using more data.

What's the solution?

The authors explore curriculum learning, a method that orders training data from easier to harder examples, similar to how humans learn. They tested both curriculum-based and non-curriculum-based training strategies across several LLMs to see which worked better for medical question answering. Their findings show that human-inspired strategies can improve accuracy, but the gains are modest and vary depending on the model and dataset. They also found that letting the model itself rate question difficulty was more effective than using human-defined difficulty levels. A minimal sketch of this ordering idea appears below.
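
To make the idea concrete, here is a minimal sketch of curriculum ordering for fine-tuning data. The dataset fields, difficulty scores, and function names are illustrative assumptions rather than the paper's actual code; the key point is simply sorting training examples from easiest to hardest, using either human-assigned or model-derived difficulty.

```python
# Illustrative sketch of curriculum ordering (hypothetical fields and scores,
# not the paper's implementation).

from dataclasses import dataclass
from typing import List


@dataclass
class QAExample:
    question: str
    answer: str
    human_difficulty: float   # e.g. a difficulty rating assigned by annotators
    llm_difficulty: float     # e.g. 1 - the model's confidence when answering


def curriculum_order(examples: List[QAExample],
                     use_llm_difficulty: bool = True) -> List[QAExample]:
    """Sort training examples from easiest to hardest (curriculum learning)."""
    key = (lambda ex: ex.llm_difficulty) if use_llm_difficulty \
        else (lambda ex: ex.human_difficulty)
    return sorted(examples, key=key)


# Toy usage: three medical QA items with made-up difficulty scores.
data = [
    QAExample("What is a normal resting heart rate?", "60-100 bpm", 0.2, 0.1),
    QAExample("Which enzyme is deficient in phenylketonuria?",
              "Phenylalanine hydroxylase", 0.6, 0.8),
    QAExample("What is the first-line treatment for anaphylaxis?",
              "Intramuscular adrenaline", 0.4, 0.3),
]

for ex in curriculum_order(data, use_llm_difficulty=True):
    print(ex.question)
```

The fine-tuning loop would then consume the examples in this order instead of a random shuffle; switching `use_llm_difficulty` corresponds to the paper's comparison between model-defined and human-defined difficulty.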

Why it matters?

This research is important because it helps make training large language models more efficient, which could lead to better AI systems for healthcare. By improving how these models learn, we can enhance their ability to provide accurate medical information and support better patient outcomes.

Abstract

Training Large Language Models (LLMs) incurs substantial data-related costs, motivating the development of data-efficient training methods through optimised data ordering and selection. Human-inspired learning strategies, such as curriculum learning, offer possibilities for efficient training by organising data according to common human learning practices. Despite evidence that fine-tuning with curriculum learning improves the performance of LLMs for natural language understanding tasks, its effectiveness is typically assessed using a single model. In this work, we extend previous research by evaluating both curriculum-based and non-curriculum-based learning strategies across multiple LLMs, using human-defined and automated data labels for medical question answering. Our results indicate a moderate impact of using human-inspired learning strategies for fine-tuning LLMs, with maximum accuracy gains of 1.77% per model and 1.81% per dataset. Crucially, we demonstrate that the effectiveness of these strategies varies significantly across different model-dataset combinations, emphasising that the benefits of a specific human-inspired strategy for fine-tuning LLMs do not generalise. Additionally, we find evidence that curriculum learning using LLM-defined question difficulty outperforms human-defined difficulty, highlighting the potential of using model-generated measures for optimal curriculum design.