Segmenting Text and Learning Their Rewards for Improved RLHF in Language Model
Yueqin Yin, Shentao Yang, Yujia Xie, Ziyi Yang, Yuting Sun, Hany Awadalla, Weizhu Chen, Mingyuan Zhou
2025-01-08

Summary
This paper introduces a new way to improve how AI language models learn from human feedback, making them better at generating text that people prefer.
What's the problem?
Current methods for teaching AI to write like humans have two main issues. Some methods only give feedback once, at the end of a long piece of text, which makes it hard for the AI to tell which parts were good or bad. Other methods give feedback for every single word, which can be too fine-grained for the AI to assign credit properly.
What's the solution?
The researchers created a new method that breaks text into small, meaningful chunks (like phrases or short sentences) and gives feedback on each chunk. They also made the feedback system smarter by considering where each chunk sits in the overall text, which helps the AI understand context better. They tested their method on three popular benchmarks (AlpacaEval 2.0, Arena-Hard, and MT-Bench) and found that it performed competitively.
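To make the idea of "feedback per chunk" concrete, here is a minimal sketch of segment-level scoring. The segmentation rule (splitting at punctuation) and the `reward_model` callable are illustrative assumptions, not the paper's actual learned segmenter or reward model, which segments text dynamically during training.

```python
import re

def split_into_segments(text):
    # Toy segmentation: cut at punctuation so each chunk is a short,
    # roughly self-contained phrase. The paper learns segment boundaries
    # dynamically; this fixed rule is only for illustration.
    pieces = re.split(r"(?<=[,.;:!?])\s+", text.strip())
    return [p for p in pieces if p]

def score_segments(segments, reward_model):
    # `reward_model` is a hypothetical callable mapping (prefix, segment)
    # to a scalar reward, standing in for the learned segment-level model.
    rewards, prefix = [], ""
    for seg in segments:
        rewards.append(reward_model(prefix, seg))
        prefix += seg + " "
    return rewards

if __name__ == "__main__":
    dummy_rm = lambda prefix, seg: len(seg) / 100.0  # placeholder scorer
    response = "Sure, here is a summary. The method splits text into segments, then scores each one."
    segs = split_into_segments(response)
    for seg, r in zip(segs, score_segments(segs, dummy_rm)):
        print(f"{r:+.2f}  {seg!r}")
```

Each chunk gets its own score, so the training signal points at specific parts of the response rather than only at the response as a whole.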
Why it matters?
This matters because it could help make AI writing assistants, chatbots, and other language tools much better at understanding what humans want and producing more natural, helpful text. It's a step towards AI that can communicate more like humans, which could be useful in many areas like education, customer service, and content creation.
Abstract
Reinforcement learning from human feedback (RLHF) has been widely adopted to align language models (LMs) with human preference. Prior RLHF works typically take a bandit formulation, which, though intuitive, ignores the sequential nature of LM generation and can suffer from the sparse reward issue. While recent works propose dense token-level RLHF, treating each token as an action may be oversubtle for proper reward assignment. In this paper, we seek to get the best of both by training and utilizing a segment-level reward model, which assigns a reward to each semantically complete text segment that spans over a short sequence of tokens. For reward learning, our method allows dynamic text segmentation and compatibility with standard sequence-preference datasets. For effective RL-based LM training against segment reward, we generalize the classical scalar bandit reward normalizers into location-aware normalizer functions and interpolate the segment reward for further densification. With these designs, our method performs competitively on three popular RLHF benchmarks for LM policy: AlpacaEval 2.0, Arena-Hard, and MT-Bench. Ablation studies are conducted to further demonstrate our method.
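The abstract mentions two training-time ingredients: location-aware normalizer functions and interpolation of segment rewards into a denser signal. The sketch below is a rough guess at the general shape of these ideas under simple assumptions; `mean_fn`, `std_fn`, uniform per-token spreading, and the toy numbers are all placeholders rather than the paper's actual parameterization.

```python
import numpy as np

def normalize_segment_rewards(rewards, positions, mean_fn, std_fn):
    # Location-aware normalization: rather than subtracting one scalar
    # baseline, use normalizer *functions* of each segment's relative
    # position in the response. `mean_fn` and `std_fn` are hypothetical
    # stand-ins for the learned normalizers.
    r = np.asarray(rewards, dtype=float)
    p = np.asarray(positions, dtype=float)
    return (r - mean_fn(p)) / (std_fn(p) + 1e-8)

def densify_to_tokens(norm_rewards, segment_token_counts):
    # Densification: give every token in a segment that segment's
    # normalized reward, so the RL update sees a per-token signal.
    # Repeating the reward is one simple choice used here for illustration.
    dense = []
    for r, n in zip(norm_rewards, segment_token_counts):
        dense.extend([r] * n)
    return dense

if __name__ == "__main__":
    seg_rewards = [0.8, -0.2, 0.5]       # raw segment-level rewards (toy values)
    seg_positions = [0.1, 0.5, 0.9]      # relative location of each segment
    seg_lengths = [4, 6, 3]              # number of tokens per segment
    norm = normalize_segment_rewards(
        seg_rewards, seg_positions,
        mean_fn=lambda p: 0.1 * p,       # toy linear normalizer functions
        std_fn=lambda p: np.ones_like(p),
    )
    print(densify_to_tokens(norm, seg_lengths))
```

The intent of the sketch is only to show how a segment-level score can be adjusted by where the segment appears and then spread over tokens, which is the densification step the abstract refers to.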