Hybrid Preferences: Learning to Route Instances for Human vs. AI Feedback
Lester James V. Miranda, Yizhong Wang, Yanai Elazar, Sachin Kumar, Valentina Pyatkin, Faeze Brahman, Noah A. Smith, Hannaneh Hajishirzi, Pradeep Dasigi
2024-10-28

Summary
This paper introduces a routing framework for hybrid preferences, which combines human feedback and AI-generated feedback to improve the quality of the preference annotations used to train language models.
What's the problem?
Collecting human feedback for training language models is expensive and time-consuming. AI can generate feedback more quickly, cheaply, and consistently, but it also introduces biases and errors. The challenge is to obtain preference data that is both accurate and affordable to collect.
What's the solution?
The authors introduce a routing framework that decides which instances should receive human annotations and which can be annotated by an AI model. They formulate this as an optimization problem: using MultiPref, a new dataset of 10K instances labeled by both humans and AI, they train a performance prediction model that estimates how well a reward model will perform when trained on any given mix of human and AI annotations, and then select the mix with the highest predicted performance. Their experiments show that this hybrid approach outperforms relying solely on human or solely on AI feedback.
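To make the routing idea concrete, here is a minimal sketch of the two pieces described above: a performance prediction model fit on (annotation mix, observed reward-model score) pairs, and a simple search that picks the candidate mix with the highest predicted score. The feature set, the regressor, the synthetic scores, and the candidate-generation step are all simplifying assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n_instances = 1_000  # size of the preference pool (hypothetical)

def mix_features(route_to_human: np.ndarray) -> np.ndarray:
    """Summarize an annotation mix as features for the performance predictor.
    Here: just the fraction routed to humans vs. to the LM -- a stand-in for
    the richer instance-level features the paper analyzes."""
    frac_human = route_to_human.mean()
    return np.array([frac_human, 1.0 - frac_human])

# --- Step 1: fit the performance prediction model ---------------------------
# Each training example pairs the features of a human/LM annotation mix with
# the reward-model accuracy observed when training on that mix. The scores
# below are synthetic placeholders, not results from the paper.
train_mixes = [rng.random(n_instances) < p for p in np.linspace(0.0, 1.0, 25)]
train_X = np.stack([mix_features(m) for m in train_mixes])
train_y = 0.70 + 0.05 * train_X[:, 0] - 0.04 * train_X[:, 0] ** 2  # fake scores
predictor = GradientBoostingRegressor().fit(train_X, train_y)

# --- Step 2: route -- choose the candidate mix with the best predicted score
budget = 0.3  # e.g., at most 30% of instances may go to human annotators
candidates = [rng.random(n_instances) < p for p in np.linspace(0.0, budget, 50)]
scores = predictor.predict(np.stack([mix_features(c) for c in candidates]))
best_mix = candidates[int(np.argmax(scores))]
print(f"Route {best_mix.mean():.0%} of instances to humans, the rest to the LM.")
```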
Why it matters?
This research matters because it demonstrates a more efficient way to gather high-quality annotations for training language models. By cutting annotation costs while preserving (or improving) annotation quality, the method can help align AI systems with human preferences more reliably and at lower expense.
Abstract
Learning from human feedback has enabled the alignment of language models (LMs) with human preferences. However, directly collecting human preferences can be expensive, time-consuming, and can have high variance. An appealing alternative is to distill preferences from LMs as a source of synthetic annotations as they are more consistent, cheaper, and scale better than human annotation; however, they are also prone to biases and errors. In this work, we introduce a routing framework that combines inputs from humans and LMs to achieve better annotation quality, while reducing the total cost of human annotation. The crux of our approach is to identify preference instances that will benefit from human annotations. We formulate this as an optimization problem: given a preference dataset and an evaluation metric, we train a performance prediction model to predict a reward model's performance on an arbitrary combination of human and LM annotations and employ a routing strategy that selects a combination that maximizes predicted performance. We train the performance prediction model on MultiPref, a new preference dataset with 10K instances paired with human and LM labels. We show that the selected hybrid mixture of LM and direct human preferences using our routing framework achieves better reward model performance compared to using either one exclusively. We simulate selective human preference collection on three other datasets and show that our method generalizes well to all three. We analyze features from the routing model to identify characteristics of instances that can benefit from human feedback, e.g., prompts with a moderate safety concern or moderate intent complexity. We release the dataset, annotation platform, and source code used in this study to foster more efficient and accurate preference collection in the future.
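One way to write down the routing objective the abstract describes, using notation that is not taken from the paper, is as a constrained search over binary routing decisions:

$$
r^\star \;=\; \arg\max_{r \in \{0,1\}^{N}} \; \hat{f}_\theta\big(\phi(D, r)\big)
\quad \text{s.t.} \quad \sum_{i=1}^{N} r_i \le B,
$$

where $r_i = 1$ routes instance $i$ of the preference dataset $D$ to human annotators (and $r_i = 0$ to the LM), $\phi(D, r)$ denotes features of the resulting hybrid annotation mix, $\hat{f}_\theta$ is the performance prediction model trained to estimate reward-model performance, and $B$ is a human-annotation budget standing in for the cost consideration mentioned in the abstract.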