
New Desiderata for Direct Preference Optimization

Xiangkun Hu, Tong He, David Wipf

2024-07-15


Summary

This paper proposes new criteria for judging how well large language models (LLMs) learn from human preference feedback, and a new DPO-style training loss that better matches model responses to what people actually prefer.

What's the problem?

Large language models are often tuned with reinforcement learning with human feedback (RLHF) to make their responses match what humans prefer. However, the RLHF pipeline can be unstable and complicated to run, which makes it hard to guarantee that the model learns what was intended. Newer methods that fine-tune directly on preference data avoid some of this complexity, but they may still fail to faithfully balance the pre-trained model's behavior against the collected human preferences, leading to inconsistent results.
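For context, the RLHF stage that these pipelines rely on is usually framed as maximizing a separately learned reward model while staying close, in KL divergence, to a fixed reference policy. The formulation below is the standard one from the RLHF/DPO literature rather than notation taken from this paper:

```latex
% Standard KL-regularized RLHF objective (conventional notation, not the paper's):
% the policy \pi_\theta is trained to score well under a learned reward r_\phi,
% while a KL penalty, weighted by \beta, keeps it near the reference model \pi_{\mathrm{ref}}.
\max_{\pi_\theta} \;
  \mathbb{E}_{x \sim \mathcal{D},\; y \sim \pi_\theta(\cdot \mid x)}
  \big[ r_\phi(x, y) \big]
  \;-\; \beta \, \mathbb{D}_{\mathrm{KL}}\!\left[ \pi_\theta(\cdot \mid x) \,\big\|\, \pi_{\mathrm{ref}}(\cdot \mid x) \right]
```

The instability largely comes from the two-stage structure: errors in the learned reward r_\phi and the difficulty of the RL optimization both feed into the final policy.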

What's the solution?

The authors propose new evaluation criteria that expose weaknesses in existing direct preference optimization (DPO) methods, which fine-tune models on human preference data without learning a separate reward model. They then derive an alternative DPO-like loss that provably avoids these weaknesses and better interpolates between the pre-trained reference model and the observed human preferences. Their experiments corroborate the main points of this analysis.
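As a concrete reference point, here is a minimal sketch of the standard DPO loss that this family of methods builds on (the original DPO formulation, not the alternative loss proposed in this paper); the function name, tensor shapes, and beta value are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Standard DPO loss (Rafailov et al., 2023), sketched for illustration.

    Inputs are summed log-probabilities of full responses under the trained
    policy and the frozen reference model, each of shape (batch,).
    `beta` controls how tightly the policy is kept near the reference.
    """
    # Implicit rewards: beta * log(pi_theta / pi_ref) for each response.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)

    # Negative log-sigmoid of the margin between preferred and dispreferred responses.
    loss = -F.logsigmoid(chosen_rewards - rejected_rewards)
    return loss.mean()

# Toy usage with random log-probabilities standing in for model outputs.
batch = 4
loss = dpo_loss(torch.randn(batch), torch.randn(batch),
                torch.randn(batch), torch.randn(batch))
print(loss.item())
```

The single beta hyperparameter is what trades off staying near the reference model against fitting the preference data; the paper's new criteria probe how well losses of this general form actually realize that interpolation.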

Why it matters?

This research is significant because it aims to make AI systems more aligned with human expectations, leading to better interactions between people and machines. By improving how LLMs learn from feedback, we can create more reliable and user-friendly AI applications in various fields, such as customer service, education, and content creation.

Abstract

Large language models in the past have typically relied on some form of reinforcement learning with human feedback (RLHF) to better align model responses with human preferences. However, because of oft-observed instabilities when implementing these RLHF pipelines, various reparameterization techniques have recently been introduced to sidestep the need for separately learning an RL reward model. Instead, directly fine-tuning for human preferences is achieved via the minimization of a single closed-form training objective, a process originally referred to as direct preference optimization (DPO) and followed by several notable descendants. Although effective in certain real-world settings, we introduce new evaluation criteria that serve to highlight unresolved shortcomings in the ability of existing DPO methods to interpolate between a pre-trained reference model and empirical measures of human preferences, as well as unavoidable trade-offs in how low- and high-quality responses are regularized and constraints are handled. Our insights then motivate an alternative DPO-like loss that provably mitigates these limitations. Empirical results serve to corroborate notable aspects of our analyses.
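For readers who want the algebra behind "sidestepping" the reward model, the reparameterization used by standard DPO (summarized from the original DPO derivation, in conventional notation rather than this paper's) proceeds as follows:

```latex
% Closed-form optimum of the KL-regularized RLHF objective:
\pi^{*}(y \mid x)
  = \frac{1}{Z(x)} \, \pi_{\mathrm{ref}}(y \mid x) \,
    \exp\!\big( r(x, y) / \beta \big)

% Inverting this expresses the reward in terms of the policy (the reparameterization):
r(x, y)
  = \beta \log \frac{\pi^{*}(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)}
    + \beta \log Z(x)

% Substituting into the Bradley-Terry preference model cancels Z(x) and yields the
% single closed-form training objective minimized by DPO, where y_w is the preferred
% and y_l the dispreferred response:
\mathcal{L}_{\mathrm{DPO}}
  = - \, \mathbb{E}_{(x, y_w, y_l)}
    \left[ \log \sigma\!\left(
      \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
      - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
    \right) \right]
```

This is the loss implemented in the Python sketch above; the paper's alternative DPO-like loss modifies this family to address the interpolation and regularization trade-offs highlighted by its new criteria.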