Rethinking Generalization in Reasoning SFT: A Conditional Analysis on Optimization, Data, and Model Capability

Qihan Ren, Peng Wang, Ruikun Cai, Shuai Shao, Dadi Guo, Yuejin Xie, Yafu Li, Quanshi Zhang, Xia Hu, Jing Shao, Dongrui Liu

2026-04-10

Summary

This research challenges the common idea that simply training a large language model (LLM) with examples of correct answers (supervised finetuning) only helps it memorize, while using reinforcement learning is what allows it to truly learn to solve new problems. The paper investigates how well LLMs actually *generalize* their reasoning abilities after being trained with detailed, step-by-step examples.

What's the problem?

Many people believe that when you train an LLM to reason by showing it how to solve problems step-by-step, it’s just memorizing those specific examples and won’t be able to handle new, slightly different problems. The researchers wanted to figure out if this is true, and if not, what factors influence whether an LLM can actually learn to reason and apply that reasoning to unseen situations. Specifically, they questioned why some reasoning models seemed to fail when tested on problems outside of their original training data.

What's the solution?

The researchers finetuned several LLMs on reasoning tasks, carefully tracking cross-domain performance as training progressed. They discovered that poor early performance on new problems isn't necessarily a sign of failed generalization; sometimes the model simply needs more training to climb out of a temporary dip. They also found that the *quality* of the training data is crucial: clear, verified step-by-step solutions help generalization, while flawed examples hinder it. Finally, they showed that more capable LLMs pick up underlying reasoning *strategies* (like working backwards to solve a problem), while weaker models merely copy the surface style of the examples they're given.
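The "dip-and-recovery" pattern described above can be made concrete with a small sketch. The helper below is hypothetical (the paper's exact evaluation protocol isn't given in this summary): it takes cross-domain accuracy scores measured at successive training checkpoints and reports whether the series first drops below its starting value and later recovers above it, which is the signature that would make an early checkpoint misleading.

```python
def has_dip_and_recovery(accuracies, tolerance=0.01):
    """Detect a dip-and-recovery pattern in a series of checkpoint scores.

    `accuracies` is a list of cross-domain eval accuracies taken at
    successive training checkpoints (hypothetical setup). Returns True
    if the series first drops clearly below its starting value and
    later rises clearly above it.
    """
    if len(accuracies) < 3:
        return False
    start = accuracies[0]
    dipped = False
    for acc in accuracies[1:]:
        if not dipped and acc < start - tolerance:
            dipped = True          # first clear drop below the baseline
        elif dipped and acc > start + tolerance:
            return True            # later recovery above the baseline
    return False

# A run that dips at mid-training but ends above where it started:
print(has_dip_and_recovery([0.60, 0.52, 0.55, 0.66]))  # True
# A run that declines and never recovers -- a genuine failure signal:
print(has_dip_and_recovery([0.60, 0.58, 0.55, 0.50]))  # False
```

Under this framing, the paper's point is that stopping at the second checkpoint of the first series would wrongly look like the second series.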

Why it matters?

This work is important because it shows that supervised finetuning *can* lead to genuine reasoning abilities in LLMs, but it’s not guaranteed. It highlights that we need to train these models for longer, use high-quality data, and leverage the capabilities of more powerful base models to unlock their full potential. It also points out a trade-off: improving reasoning skills can sometimes come at the cost of safety, meaning we need to be mindful of unintended consequences when training these models.

Abstract

A prevailing narrative in LLM post-training holds that supervised finetuning (SFT) memorizes while reinforcement learning (RL) generalizes. We revisit this claim for reasoning SFT with long chain-of-thought (CoT) supervision and find that cross-domain generalization is not absent but conditional, jointly shaped by optimization dynamics, training data, and base-model capability. Some reported failures are under-optimization artifacts: cross-domain performance first degrades before recovering and improving with extended training (a dip-and-recovery pattern), so short-training checkpoints can underestimate generalization. Data quality and structure both matter: low-quality solutions broadly hurt generalization, while verified long-CoT traces yield consistent cross-domain gains. Model capability is essential: stronger models internalize transferable procedural patterns (e.g., backtracking) even from a toy arithmetic game, while weaker ones imitate surface verbosity. This generalization is asymmetric, however: reasoning improves while safety degrades, reframing the question from whether reasoning SFT generalizes to under what conditions and at what cost.
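The abstract's example of backtracking learned from a toy arithmetic game can be illustrated with a sketch. The game and operation names below are invented for illustration (the summary doesn't specify the paper's actual game): the task is to turn a start number into a target by chaining operations, and the solver uses depth-first search with explicit backtracking, the kind of transferable procedural pattern the paper says stronger models internalize.

```python
def solve(start, target, ops, max_depth=5, path=None):
    """Depth-first search with backtracking for a toy arithmetic game:
    transform `start` into `target` by chaining named operations.
    Returns the list of operation names, or None if no chain of length
    <= max_depth works. (Illustrative game, not the paper's.)
    """
    if path is None:
        path = []
    if start == target:
        return path
    if len(path) >= max_depth:
        return None
    for name, fn in ops.items():
        path.append(name)                      # tentatively apply an op
        result = solve(fn(start), target, ops, max_depth, path)
        if result is not None:
            return result
        path.pop()                             # backtrack: undo the choice

    return None

# Hypothetical operation set for the toy game.
ops = {"add3": lambda x: x + 3, "double": lambda x: x * 2}
print(solve(2, 10, ops))  # ['add3', 'double']  (2 -> 5 -> 10)
```

The `path.pop()` line is the backtracking step: a dead-end choice is undone before the next alternative is tried, rather than the search being abandoned.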