The Past Is Not Past: Memory-Enhanced Dynamic Reward Shaping

Yang Liu, Enxi Wang, Yufei Gao, Weixin Zhang, Bo Wang, Zhiyuan Zeng, Yikai Zhang, Yining Zheng, Xipeng Qiu

2026-04-14

Summary

This paper addresses a problem with training large language models using reinforcement learning: they often get stuck repeating the same mistakes, limiting how well they learn.

What's the problem?

When you use reinforcement learning to improve language models, they can become too focused on what they *think* works, even if it's actually wrong. This leads to the model repeatedly making the same errors because it doesn't explore enough different options. Simply encouraging randomness (for example, with entropy regularization) doesn't solve this, because it doesn't specifically target and discourage those recurring bad behaviors.

What's the solution?

The researchers developed a new method called MEDS, which stands for Memory-Enhanced Dynamic reward Shaping. It works by remembering what the model did in previous attempts. Specifically, it stores the model's intermediate internal representations from each attempt and uses density-based clustering to group similar failure patterns together. The more often a mistake recurs, the larger the penalty MEDS applies when the model repeats it, pushing the model to try something different. This helps the model explore more diverse solutions and avoid getting stuck in loops of errors.
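The core idea can be illustrated with a minimal sketch. The paper clusters stored model representations with a density-based method and penalizes failed rollouts in proportion to how common their error cluster is; the sketch below substitutes a simple distance-threshold (single-linkage) clustering for the real density-based algorithm, and the names `shaped_rewards`, `cluster_failures`, `alpha`, and `eps` are illustrative, not from the paper.

```python
import math

def cluster_failures(features, eps=0.8):
    """Simplified stand-in for density-based clustering: union-find
    single-linkage, joining two failures whose feature vectors lie
    within distance eps of each other."""
    n = len(features)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(features[i], features[j]) <= eps:
                parent[find(i)] = find(j)
    return [find(i) for i in range(n)]

def shaped_rewards(features, base_rewards, correct, alpha=0.5, eps=0.8):
    """Subtract a penalty from each failed rollout proportional to the
    prevalence of its error cluster among all failures; singleton
    clusters count as novel mistakes and get no extra penalty."""
    rewards = list(base_rewards)
    fail_idx = [i for i, ok in enumerate(correct) if not ok]
    if not fail_idx:
        return rewards
    labels = cluster_failures([features[i] for i in fail_idx], eps=eps)
    sizes = {}
    for lab in labels:
        sizes[lab] = sizes.get(lab, 0) + 1
    for pos, i in enumerate(fail_idx):
        size = sizes[labels[pos]]
        if size > 1:  # recurring error pattern -> larger penalty
            rewards[i] -= alpha * size / len(fail_idx)
    return rewards
```

For example, if two failed rollouts have nearly identical features while a third failure and a success are far away, only the two clustered failures receive the extra penalty, steering the policy away from the repeated mistake.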

Why it matters?

This research is important because it improves the reliability and performance of large language models. By preventing them from repeating mistakes, MEDS allows them to learn more effectively and achieve better results on various tasks. The improvements shown across different models and datasets suggest this is a generally useful technique for training these powerful AI systems.

Abstract

Despite the success of reinforcement learning for large language models, a common failure mode is reduced sampling diversity, where the policy repeatedly generates similar erroneous behaviors. Classical entropy regularization encourages randomness under the current policy, but does not explicitly discourage recurrent failure patterns across rollouts. We propose MEDS, a Memory-Enhanced Dynamic reward Shaping framework that incorporates historical behavioral signals into reward design. By storing and leveraging intermediate model representations, we capture features of past rollouts and use density-based clustering to identify frequently recurring error patterns. Rollouts assigned to more prevalent error clusters are penalized more heavily, encouraging broader exploration while reducing repeated mistakes. Across five datasets and three base models, MEDS consistently improves average performance over existing baselines, achieving gains of up to 4.13 pass@1 points and 4.37 pass@128 points. Additional analyses using both LLM-based annotations and quantitative diversity metrics show that MEDS increases behavioral diversity during sampling.