LaSeR: Reinforcement Learning with Last-Token Self-Rewarding

Wenkai Yang, Weijie Liu, Ruobing Xie, Yiju Guo, Lulu Wu, Saiyong Yang, Yankai Lin

2025-10-17

Summary

This paper introduces a new way to improve how well large language models (LLMs) reason and check their own work under Reinforcement Learning with Verifiable Rewards (RLVR), a training paradigm that rewards models for answers that can be verified as correct.

What's the problem?

Currently, LLMs trained with RLVR to self-verify must run two steps: first generating a solution, and then separately checking it using a second prompt template. This two-step process is slow and inefficient. The core issue is that existing methods don't combine reasoning and self-checking into a single, streamlined pass.

What's the solution?

The researchers derived a mathematical shortcut: the true quality of an LLM's reasoning can be read off from the model's own next-token prediction at the end of its answer. Specifically, the self-rewarding score is the log-probability the model assigns to a pre-specified token right after the solution's last token, minus a pre-calculated constant, scaled by the KL coefficient. Their algorithm, LaSeR, adds a small mean-squared-error (MSE) term to the usual RLVR training loss that aligns these last-token self-rewarding scores with the rewards a separate verifier would assign, so the model learns to reason and to score its own answers at the same time. The extra cost is minimal: predicting just one additional token after generation.
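Based on the abstract's description, the score and the auxiliary loss can be sketched as below. This is a minimal illustration, not the authors' code: the function names, the marker-token index, and the constants passed in are all assumptions for demonstration.

```python
import math

def log_softmax(logits):
    # Convert raw logits to log-probabilities (numerically stable).
    m = max(logits)
    log_z = m + math.log(sum(math.exp(x - m) for x in logits))
    return [x - log_z for x in logits]

def last_token_self_reward(next_token_logprobs, marker_id, kl_coef, const):
    # Self-rewarding score: the KL coefficient times the gap between the
    # log-probability of a pre-specified marker token (predicted right
    # after the solution's last token) and a pre-calculated constant.
    return kl_coef * (next_token_logprobs[marker_id] - const)

def laser_aux_loss(self_scores, verifier_rewards):
    # The MSE term LaSeR adds to the standard RLVR loss, pulling the
    # self-rewarding scores toward the verifier-based reasoning rewards.
    n = len(self_scores)
    return sum((s - r) ** 2 for s, r in zip(self_scores, verifier_rewards)) / n
```

During training, each generated solution contributes one extra forward step to obtain the next-token log-probabilities at its last token; both the original RLVR loss and this MSE term are then optimized jointly.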

Why it matters?

This work is important because it makes LLMs more efficient at reasoning and self-checking. By combining these steps and relying on the model’s own predictions, LaSeR speeds up the process and improves performance, especially when dealing with complex problems. It allows LLMs to better assess the quality of their own responses without needing a separate verification step, which is crucial for real-world applications.

Abstract

Reinforcement Learning with Verifiable Rewards (RLVR) has recently emerged as a core paradigm for enhancing the reasoning capabilities of Large Language Models (LLMs). To address the lack of verification signals at test time, prior studies incorporate the training of model's self-verification capability into the standard RLVR process, thereby unifying reasoning and verification capabilities within a single LLM. However, previous practice requires the LLM to sequentially generate solutions and self-verifications using two separate prompt templates, which significantly reduces efficiency. In this work, we theoretically reveal that the closed-form solution to the RL objective of self-verification can be reduced to a remarkably simple form: the true reasoning reward of a solution is equal to its last-token self-rewarding score, which is computed as the difference between the policy model's next-token log-probability assigned to any pre-specified token at the solution's last token and a pre-calculated constant, scaled by the KL coefficient. Based on this insight, we propose LaSeR (Reinforcement Learning with Last-Token Self-Rewarding), an algorithm that simply augments the original RLVR loss with a MSE loss that aligns the last-token self-rewarding scores with verifier-based reasoning rewards, jointly optimizing the reasoning and self-rewarding capabilities of LLMs. The optimized self-rewarding scores can be utilized in both training and testing to enhance model performance. Notably, our algorithm derives these scores from the predicted next-token probability distribution of the last token immediately after generation, incurring only the minimal extra cost of one additional token inference. Experiments show that our method not only improves the model's reasoning performance but also equips it with remarkable self-rewarding capability, thereby boosting its inference-time scaling performance.
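Condensing the abstract's closed-form result into a formula (the notation here is my own shorthand, not taken from the paper):

```latex
% True reasoning reward of a solution y to prompt x, read off from the
% policy's next-token distribution at y's last token, where
% beta = KL coefficient, v = pre-specified token, c = pre-calculated constant:
r(x, y) \;=\; \beta \,\bigl( \log \pi_\theta(v \mid x, y) \;-\; c \bigr)
```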