
Selective Self-to-Supervised Fine-Tuning for Generalization in Large Language Models

Sonam Gupta, Yatin Nandwani, Asaf Yehudai, Dinesh Khandelwal, Dinesh Raghu, Sachindra Joshi

2025-02-17


Summary

This paper introduces Selective Self-to-Supervised Fine-Tuning (S3FT), a new method that improves how large language models (LLMs) learn specific tasks without becoming too specialized or losing their ability to generalize.

What's the problem?

When AI models are fine-tuned on specific tasks, they often overfit: they become too focused on the training data and lose their ability to handle new or different situations. This makes the models less flexible and reduces their overall usefulness.

What's the solution?

The researchers created S3FT, which fine-tunes AI models in a smarter way. Instead of training only on the original gold answers, S3FT first uses a judge to check which training questions the model already answers correctly. For those questions, it fine-tunes on the model's own correct responses; for the rest, it uses the gold answer (or a paraphrase of it). Because the model keeps learning from answers phrased in its own style, it drifts less from its original behavior and generalizes better. The researchers tested this method on tasks like math, coding, and reading comprehension, showing that it improved performance and reduced overfitting compared to standard fine-tuning.
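The data-selection step described above can be sketched as a small loop. This is only an illustrative outline, not the paper's implementation: `generate`, `judge`, and `paraphrase` are hypothetical stand-ins for the model, the correctness judge, and the gold-paraphrasing step.

```python
def build_s3ft_dataset(generate, judge, paraphrase, train_set):
    """Build an S3FT-style fine-tuning set (illustrative sketch).

    For each (query, gold) pair: keep the model's own response when the
    judge accepts it, otherwise fall back to the gold response (or a
    paraphrase of it in the model's own style).
    """
    data = []
    for query, gold in train_set:
        prediction = generate(query)            # model's own answer
        if judge(query, prediction, gold):      # judge deems it correct
            data.append((query, prediction))    # train on model's wording
        else:
            data.append((query, paraphrase(gold)))  # train on gold/paraphrase
    return data


# Toy stand-ins to show the mechanics (not real components):
generate = lambda q: {"1+1": "2", "2+2": "4"}[q]   # pretend model
judge = lambda q, pred, gold: pred == gold         # exact-match judge
paraphrase = lambda gold: gold                     # identity paraphrase here

train = [("1+1", "2"), ("2+2", "5")]               # second gold differs on purpose
print(build_s3ft_dataset(generate, judge, paraphrase, train))
```

On the first example the model's own answer is kept; on the second the judge rejects it, so the (paraphrased) gold answer is used instead. The fine-tuning step itself then proceeds as standard supervised training on this mixed dataset.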

Why it matters?

This matters because it helps make AI models more reliable and versatile. By improving how these models learn, S3FT allows them to perform better on specific tasks while still being able to handle new challenges. This could make AI systems more practical for real-world applications where flexibility and accuracy are both important.

Abstract

Fine-tuning Large Language Models (LLMs) on specific datasets is a common practice to improve performance on target tasks. However, this performance gain often leads to overfitting, where the model becomes too specialized in either the task or the characteristics of the training data, resulting in a loss of generalization. This paper introduces Selective Self-to-Supervised Fine-Tuning (S3FT), a fine-tuning approach that achieves better performance than the standard supervised fine-tuning (SFT) while improving generalization. S3FT leverages the existence of multiple valid responses to a query. By utilizing the model's correct responses, S3FT reduces model specialization during the fine-tuning stage. S3FT first identifies the correct model responses from the training set by deploying an appropriate judge. Then, it fine-tunes the model using the correct model responses and the gold response (or its paraphrase) for the remaining samples. The effectiveness of S3FT is demonstrated through experiments on mathematical reasoning, Python programming and reading comprehension tasks. The results show that standard SFT can lead to an average performance drop of up to 4.4 on multiple benchmarks, such as MMLU and TruthfulQA. In contrast, S3FT reduces this drop by half, i.e. 2.5, indicating better generalization capabilities than SFT while performing significantly better on the fine-tuning tasks.