Weak-Driven Learning: How Weak Agents Make Strong Agents Stronger
Zehao Chen, Gongxun Li, Tianxiang Ai, Yifei Li, Zixuan Huang, Wang Zhou, Fuzhen Zhuang, Xianglong Liu, Jianxin Li, Deqing Wang, Yikun Ban
2026-02-10
Summary
This paper explores a way to keep improving large language models even after extensive training, when their performance appears to have saturated.
What's the problem?
Large language models become very good at prediction, but eventually further training stops helping: the models grow overconfident and learn less and less from each update. Existing methods just keep reinforcing what the model *already* thinks is right, but the paper argues there is still useful information hidden in the model's earlier, less confident checkpoints.
What's the solution?
The researchers developed a technique called WMSS, which stands for Weak Agents Can Make Strong Agents Stronger. The idea is to go back to earlier checkpoints of the model – the 'weak agents' – and use them to spot where the current, strong model still has recoverable gaps. By comparing how uncertain (high-entropy) the model was in the past with how confident it is now, WMSS identifies what was only shakily learned and then retrains on exactly those spots, essentially filling in gaps in the model's knowledge. A rough sketch of this idea follows.
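To make the mechanism concrete, here is a minimal sketch of how one might flag "recoverable gaps" by comparing per-token entropy between a weak checkpoint and the current strong model. It assumes HuggingFace-style causal LMs whose outputs expose `.logits`; the function names, the threshold `tau`, and the exact gap criterion are illustrative assumptions, not the paper's specification.

```python
import torch
import torch.nn.functional as F


def token_entropy(logits: torch.Tensor) -> torch.Tensor:
    """Shannon entropy of the next-token distribution at each position."""
    log_p = F.log_softmax(logits, dim=-1)
    return -(log_p.exp() * log_p).sum(dim=-1)  # shape: (batch, seq_len)


@torch.no_grad()
def recoverable_gap_mask(weak_model, strong_model, input_ids, tau=0.5):
    """Flag positions where the weak checkpoint was notably more uncertain
    than the current strong model. This is one plausible reading of the
    paper's 'entropy dynamics'; `tau` is a hypothetical threshold."""
    h_weak = token_entropy(weak_model(input_ids).logits)
    h_strong = token_entropy(strong_model(input_ids).logits)
    return (h_weak - h_strong) > tau  # boolean mask, (batch, seq_len)
```

Both models would typically be in `eval()` mode here; only the strong model is later updated, so the weak checkpoint adds cost during training but none at inference, consistent with the paper's claim of zero additional inference cost.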
Why it matters?
This is important because it allows us to get better performance out of large language models without drastically increasing their size or training time. And the extra cost is paid only during the improvement phase – actually *using* the model afterwards, for things like answering questions or writing code, costs nothing extra.
Abstract
As post-training optimization becomes central to improving large language models, we observe a persistent saturation bottleneck: once models grow highly confident, further training yields diminishing returns. While existing methods continue to reinforce target predictions, we find that informative supervision signals remain latent in models' own historical weak states. Motivated by this observation, we propose WMSS (Weak Agents Can Make Strong Agents Stronger), a post-training paradigm that leverages weak checkpoints to guide continued optimization. By identifying recoverable learning gaps via entropy dynamics and reinforcing them through compensatory learning, WMSS enables strong agents to improve beyond conventional post-training saturation. Experiments on mathematical reasoning and code generation datasets show that agents trained with our approach achieve effective performance improvements, while incurring zero additional inference cost.
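The abstract's "compensatory learning" plausibly amounts to a reweighted training objective that emphasizes the flagged gap positions. A minimal sketch under that assumption follows; the upweighting scheme and the factor `alpha` are illustrative, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F


def compensatory_loss(logits, labels, gap_mask, alpha=2.0):
    """Token-level cross-entropy, upweighted on flagged gap positions.

    logits:   (batch, seq_len, vocab) strong-model predictions
    labels:   (batch, seq_len) target token ids
    gap_mask: (batch, seq_len) boolean mask from the weak checkpoint
    alpha:    hypothetical reweighting factor, not taken from the paper
    """
    per_token = F.cross_entropy(
        logits.view(-1, logits.size(-1)),
        labels.view(-1),
        reduction="none",
    ).view(labels.shape)
    weights = torch.ones_like(per_token)
    weights[gap_mask] = alpha  # reinforce positions the weak agent struggled on
    return (weights * per_token).sum() / weights.sum()
```

In a training loop, this loss would simply replace the standard cross-entropy while the gap mask is refreshed from the weak checkpoint; everything else about the post-training setup stays unchanged.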