Self-Consistency Preference Optimization
Archiki Prasad, Weizhe Yuan, Richard Yuanzhe Pang, Jing Xu, Maryam Fazel-Zarandi, Mohit Bansal, Sainbayar Sukhbaatar, Jason Weston, Jane Yu
2024-11-07

Summary
This paper presents Self-Consistency Preference Optimization (ScPO), a method that lets models improve their own answers on complex reasoning tasks without human-annotated answers or preferences.
What's the problem?
Existing self-alignment methods often struggle on complex reasoning tasks because, without ground-truth labels, it is hard to assign correct rewards to model-generated answers. The resulting training signal is noisy, which makes it difficult for models to learn effectively and improve over time.
What's the solution?
The researchers developed ScPO, which trains the model to prefer consistent answers over inconsistent ones. For each new, unlabeled problem, the model samples multiple solutions; answers that agree across samples are treated as preferred, conflicting answers as dispreferred, and the model is iteratively optimized on these preference pairs (see the sketch below). By rewarding consistency, ScPO improves performance on reasoning tasks such as math and logic problems: models trained with ScPO outperform those trained with conventional reward-model-based methods and approach the performance of models trained with human-provided answers.
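As a rough illustration of the idea, the sketch below builds a preference pair for one unlabeled problem by majority-voting over sampled answers. The helper names and the "Answer:" marker are assumptions for illustration, not the paper's exact implementation.

```python
# Minimal sketch: turn sampled responses into a self-consistency preference pair.
# (Hypothetical helpers; the paper's pipeline may differ in its details.)
from collections import Counter

def extract_final_answer(response: str) -> str:
    """Pull the final answer out of a sampled response.
    Assumes the answer follows an 'Answer:' marker."""
    return response.split("Answer:")[-1].strip()

def build_preference_pair(responses: list[str]):
    """Prefer a response whose final answer is most consistent across samples
    (majority vote) and reject one whose answer is least consistent."""
    answers = [extract_final_answer(r) for r in responses]
    votes = Counter(answers)

    most_common_ans, _ = votes.most_common(1)[0]
    least_common_ans, _ = votes.most_common()[-1]
    if most_common_ans == least_common_ans:
        return None  # all samples agree, so there is no pair to learn from

    chosen = next(r for r in responses if extract_final_answer(r) == most_common_ans)
    rejected = next(r for r in responses if extract_final_answer(r) == least_common_ans)
    return {"chosen": chosen, "rejected": rejected}
```

In this sketch, problems where all samples agree are simply skipped, since they yield no preference signal.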
Why it matters?
This research is important because it demonstrates a way for models to learn and improve independently, making them more efficient and effective in solving complex problems. By reducing reliance on human feedback, ScPO could lead to faster advancements in AI technology and better performance in various applications.
Abstract
Self-alignment, whereby models learn to improve themselves without human annotation, is a rapidly growing research area. However, existing techniques often fail to improve complex reasoning tasks due to the difficulty of assigning correct rewards. An orthogonal approach that is known to improve correctness is self-consistency, a method applied at inference time based on multiple sampling in order to find the most consistent answer. In this work, we extend the self-consistency concept to help train models. We thus introduce self-consistency preference optimization (ScPO), which iteratively trains consistent answers to be preferred over inconsistent ones on unsupervised new problems. We show ScPO leads to large improvements over conventional reward model training on reasoning tasks such as GSM8K and MATH, closing the gap with supervised training with gold answers or preferences, and that combining ScPO with standard supervised learning improves results even further. On ZebraLogic, ScPO finetunes Llama-3 8B to be superior to Llama-3 70B, Gemma-2 27B, and Claude-3 Haiku.
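For concreteness, the consistency-derived pairs from the sketch above could be optimized with a standard DPO-style objective. The snippet below is only one plausible instantiation, not the paper's exact loss (which may, for example, weight pairs differently); the per-response log-probabilities are assumed to be precomputed.

```python
# One plausible training objective over consistency-derived pairs:
# a plain DPO-style loss against a frozen reference model.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_lp: torch.Tensor, policy_rejected_lp: torch.Tensor,
             ref_chosen_lp: torch.Tensor, ref_rejected_lp: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Push the policy to prefer the consistent (chosen) answer over the
    inconsistent (rejected) one, relative to the reference model."""
    chosen_logratio = policy_chosen_lp - ref_chosen_lp
    rejected_logratio = policy_rejected_lp - ref_rejected_lp
    return -F.logsigmoid(beta * (chosen_logratio - rejected_logratio)).mean()
```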