Entropy-Adaptive Fine-Tuning: Resolving Confident Conflicts to Mitigate Forgetting

Muxi Diao, Lele Yang, Wuxuan Gong, Yutong Zhang, Zhonghao Yan, Yufei Han, Kongming Liang, Weiran Xu, Zhanyu Ma

2026-01-08

Summary

This paper investigates why adapting large language models to specific tasks using a common method called Supervised Fine-Tuning (SFT) often makes them forget what they already knew, and proposes a new method to prevent this forgetting while still achieving good performance on the new task.

What's the problem?

When you try to teach a large language model a new skill using SFT, it often gets really good at that skill, but it simultaneously becomes worse at things it already knew how to do – this is called catastrophic forgetting. The researchers found that this happens because SFT forces the model to change its internal understanding to match the new training data, even when that new data contradicts what the model already 'believes' to be true. This creates a conflict where the model is confident in its original prediction but is penalized for it during training, leading to harmful adjustments.
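To make the idea of a "confident conflict" concrete, here is a minimal sketch in plain Python. It flags a token position where the model assigns low probability to the ground-truth label while the distribution itself has low entropy (i.e., the model is confidently predicting something else). The tiny 4-token vocabulary, the example distribution, and the 0.1/0.5 thresholds are all illustrative assumptions, not values from the paper.

```python
import math

def token_entropy(probs):
    """Shannon entropy (in nats) of a next-token distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

# Hypothetical 4-token vocabulary distribution at one position.
# The model puts almost all mass on token 0 (low entropy), but the
# ground-truth label is token 1 (low probability on the target).
probs = [0.94, 0.02, 0.02, 0.02]
target = 1  # ground-truth token index

entropy = token_entropy(probs)   # low -> model is confident
target_prob = probs[target]      # low -> supervision disagrees

# Illustrative thresholds: low target probability AND low entropy.
is_confident_conflict = target_prob < 0.1 and entropy < 0.5
print(is_confident_conflict)  # True: confident, but wrong w.r.t. the label
```

A low-probability token with *high* entropy would instead signal ordinary uncertainty, which the paper treats as a case the model should still learn from.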

What's the solution?

The researchers developed a new fine-tuning method called Entropy-Adaptive Fine-Tuning (EAFT). Instead of looking only at the probability the model assigns to the correct answer, EAFT also measures how *uncertain* the model is across all possible answers (its entropy), and uses that uncertainty to decide when to update the model. If the model is genuinely unsure, it learns from the new data as usual. But if the model is confident and the new data disagrees, EAFT reduces the impact of that conflicting data on the model's learning process, preventing it from overwriting its previous knowledge.
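The gating idea can be sketched as a per-token loss weight derived from entropy. This is not the paper's exact formulation; it is a simple illustrative choice where the standard cross-entropy loss is scaled by normalized entropy, so uncertain tokens keep their gradient and confident (low-entropy) tokens are suppressed.

```python
import math

def entropy_gated_nll(probs, target, vocab_size):
    """Per-token SFT loss scaled by an entropy gate.

    Gate = entropy / max_entropy, in [0, 1]: near 1 when the model is
    uncertain (learn normally), near 0 when the model is confident
    (suppress the update even if supervision conflicts). The paper's
    actual gate may differ; this is an illustrative stand-in.
    """
    entropy = -sum(p * math.log(p) for p in probs if p > 0.0)
    max_entropy = math.log(vocab_size)  # entropy of a uniform distribution
    gate = entropy / max_entropy
    nll = -math.log(probs[target])      # standard cross-entropy term
    return gate * nll

# Uncertain token: near-uniform distribution -> gate ~ 1, loss kept.
uncertain = [0.3, 0.25, 0.25, 0.2]
# Confident conflict: peaked distribution, target disagrees -> gate small.
conflict = [0.94, 0.02, 0.02, 0.02]

print(entropy_gated_nll(uncertain, 1, 4))  # close to the raw NLL
print(entropy_gated_nll(conflict, 1, 4))   # strongly down-weighted
```

The design point is that the gate depends on the whole distribution's entropy, not just the target's probability, which is what lets it separate "the model doesn't know" from "the model confidently disagrees".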

Why does it matter?

This research is important because it provides a way to adapt large language models to new tasks without sacrificing their existing abilities. This is crucial for building more versatile and reliable AI systems that can perform a wide range of tasks without constantly needing to be retrained from scratch. The EAFT method shows promising results across different types of tasks and model sizes, suggesting it could be a valuable tool for anyone working with large language models.

Abstract

Supervised Fine-Tuning (SFT) is the standard paradigm for domain adaptation, yet it frequently incurs the cost of catastrophic forgetting. In sharp contrast, on-policy Reinforcement Learning (RL) effectively preserves general capabilities. We investigate this discrepancy and identify a fundamental distributional gap: while RL aligns with the model's internal belief, SFT forces the model to fit external supervision. This mismatch often manifests as "Confident Conflicts": tokens characterized by low probability but low entropy. In these instances, the model is highly confident in its own prediction but is forced to learn a divergent ground truth, triggering destructive gradient updates. To address this, we propose Entropy-Adaptive Fine-Tuning (EAFT). Unlike methods relying solely on prediction probability, EAFT utilizes token-level entropy as a gating mechanism to distinguish between epistemic uncertainty and knowledge conflict. This allows the model to learn from uncertain samples while suppressing gradients on conflicting data. Extensive experiments on Qwen and GLM series (ranging from 4B to 32B parameters) across mathematical, medical, and agentic domains confirm our hypothesis. EAFT consistently matches the downstream performance of standard SFT while significantly mitigating the degradation of general capabilities.