GenARM: Reward Guided Generation with Autoregressive Reward Model for Test-time Alignment
Yuancheng Xu, Udari Madhushani Sehwag, Alec Koppel, Sicheng Zhu, Bang An, Furong Huang, Sumitra Ganesh
2024-10-14

Summary
This paper introduces GenARM, a method that aligns the responses of large language models (LLMs) with human preferences during generation (at test time), without the models needing to be retrained.
What's the problem?
Large language models are powerful, but they often need to be adjusted to match what humans want in their responses. Traditional methods for doing this require a lot of time and resources because they involve retraining the models with human feedback. This can be costly and inefficient, especially when trying to cater to different user preferences.
What's the solution?
GenARM solves this problem by using a special type of reward model called the Autoregressive Reward Model. Instead of scoring only complete responses, this model predicts a reward for each candidate next token based on the partial response generated so far. During decoding, these token-level rewards are combined with the frozen LLM's own predictions, so the LLM stays 'frozen' (not retrained) while still being guided toward better outputs. The authors show that this significantly improves response quality and allows flexible adjustment to different user preferences without the high costs of full retraining.
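To make the decoding-time mechanism concrete, here is a minimal sketch of reward-guided next-token selection. It assumes a frozen base LLM and an autoregressive reward model that share a tokenizer and expose next-token logits in the HuggingFace `transformers` style; the function name, the `beta` weighting coefficient, and the exact log-space combination are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def guided_next_token_logprobs(base_lm, reward_lm, input_ids, beta=1.0):
    """Sketch of reward-guided decoding (illustrative, not the paper's code):
    combine the frozen base LLM's next-token distribution with next-token
    scores from an autoregressive reward model, then renormalize, i.e.
    p(x_t | x_<t) proportional to p_base(x_t | x_<t) * exp(beta * r(x_t | x_<t)).
    """
    with torch.no_grad():
        base_logits = base_lm(input_ids).logits[:, -1, :]      # frozen LLM, no parameter updates
        reward_logits = reward_lm(input_ids).logits[:, -1, :]  # token-level reward scores

    # Work in log space: log p_base + beta * reward, then softmax-normalize.
    base_logprobs = F.log_softmax(base_logits, dim=-1)
    reward_scores = F.log_softmax(reward_logits, dim=-1)  # assumed reward parametrization
    return F.log_softmax(base_logprobs + beta * reward_scores, dim=-1)

# Hypothetical usage with two causal LMs that share a tokenizer:
# logprobs = guided_next_token_logprobs(base_lm, reward_lm, input_ids)
# next_token = torch.multinomial(logprobs.exp(), num_samples=1)
```

The key point the sketch illustrates is that both models only run a forward pass per generated token; the base LLM's parameters are never updated.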
Why it matters?
This research is important because it enhances how AI models respond to users, making them more effective and aligned with human expectations. By improving the way these models generate text, GenARM can lead to better applications in areas like customer service, content creation, and any field where understanding and responding to human needs is crucial.
Abstract
Large Language Models (LLMs) exhibit impressive capabilities but require careful alignment with human preferences. Traditional training-time methods finetune LLMs using human preference datasets but incur significant training costs and require repeated training to handle diverse user preferences. Test-time alignment methods address this by using reward models (RMs) to guide frozen LLMs without retraining. However, existing test-time approaches rely on trajectory-level RMs which are designed to evaluate complete responses, making them unsuitable for autoregressive text generation that requires computing next-token rewards from partial responses. To address this, we introduce GenARM, a test-time alignment approach that leverages the Autoregressive Reward Model--a novel reward parametrization designed to predict next-token rewards for efficient and effective autoregressive generation. Theoretically, we demonstrate that this parametrization can provably guide frozen LLMs toward any distribution achievable by traditional RMs within the KL-regularized reinforcement learning framework. Experimental results show that GenARM significantly outperforms prior test-time alignment baselines and matches the performance of training-time methods. Additionally, GenARM enables efficient weak-to-strong guidance, aligning larger LLMs with smaller RMs without the high costs of training larger models. Furthermore, GenARM supports multi-objective alignment, allowing real-time trade-offs between preference dimensions and catering to diverse user preferences without retraining.
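The multi-objective alignment mentioned in the abstract can be illustrated in the same style: with several autoregressive reward models (say, one for helpfulness and one for harmlessness), their token-level scores can be mixed with user-chosen weights at decoding time, so the trade-off changes per request without any retraining. This is a hypothetical sketch; the model handles, weights, and shared-tokenizer assumption are placeholders rather than the paper's implementation.

```python
import torch
import torch.nn.functional as F

def multi_objective_next_token_logprobs(base_lm, reward_lms, weights, input_ids):
    """Illustrative sketch: blend a frozen base LLM with several
    autoregressive reward models using per-objective weights chosen
    at inference time (e.g. helpfulness vs. harmlessness)."""
    with torch.no_grad():
        combined = F.log_softmax(base_lm(input_ids).logits[:, -1, :], dim=-1)
        for rm, w in zip(reward_lms, weights):
            combined = combined + w * F.log_softmax(rm(input_ids).logits[:, -1, :], dim=-1)
    return F.log_softmax(combined, dim=-1)

# e.g. weights = [0.7, 0.3] to favor helpfulness over harmlessness for one user,
# and [0.3, 0.7] for another, with no retraining in between.
```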