
Dedicated Feedback and Edit Models Empower Inference-Time Scaling for Open-Ended General-Domain Tasks

Zhilin Wang, Jiaqi Zeng, Olivier Delalleau, Daniel Egert, Ellie Evans, Hoo-Chang Shin, Felipe Soares, Yi Dong, Oleksii Kuchaiev

2025-03-07


Summary

This paper introduces a way to make AI language models better at open-ended tasks by using separate, dedicated models for feedback and editing during answer generation.

What's the problem?

Current methods for improving AI performance during answer generation (called inference-time scaling) work well for tasks with clear right or wrong answers, like math or coding, because the answers can be automatically verified. However, they struggle with open-ended tasks that don't have a single correct answer, limiting how useful these techniques are for general conversation or creative work.

What's the solution?

The researchers created a system that mimics how humans improve their work. They trained three separate AI models: one to generate an initial answer, another to give detailed feedback on that answer, and a third to edit the answer based on the feedback. By scaling up the number of drafts, feedback passes, and edits, then selecting the best result, they significantly improved performance on open-ended tasks.
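The generate-feedback-edit loop can be sketched as a simple search over candidate responses. This is a minimal illustration only: the model functions and the selection heuristic below are hypothetical stand-ins (in the paper's setup, each role is a separate fine-tuned 70B Llama 3 model, and the best response is chosen by a learned selector, not by length).

```python
# Hypothetical stand-ins for the three dedicated models.
# In the actual paper, each is a separate fine-tuned LLM.
def generator_model(prompt: str) -> str:
    return f"draft answer to: {prompt}"

def feedback_model(prompt: str, response: str) -> str:
    return f"feedback on '{response}': add concrete examples"

def edit_model(prompt: str, response: str, feedback: str) -> str:
    return f"{response} [revised per: {feedback}]"

def score_response(prompt: str, response: str) -> int:
    # Placeholder selector; the real system uses a trained model
    # to pick the best final response.
    return len(response)

def feedback_edit_scaling(prompt: str,
                          n_drafts: int = 3,
                          n_feedbacks: int = 2,
                          n_edits: int = 2) -> str:
    """Scale inference by sampling drafts, feedback, and edits,
    then return the highest-scoring edited response."""
    candidates = []
    for _ in range(n_drafts):
        draft = generator_model(prompt)
        for _ in range(n_feedbacks):
            fb = feedback_model(prompt, draft)
            for _ in range(n_edits):
                candidates.append(edit_model(prompt, draft, fb))
    return max(candidates, key=lambda r: score_response(prompt, r))

best = feedback_edit_scaling("What makes a good essay?")
```

Increasing `n_drafts`, `n_feedbacks`, or `n_edits` spends more compute at inference time in exchange for a better final answer, which is the scaling axis the paper studies.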

Why it matters?

This matters because it extends inference-time scaling to a much wider range of tasks, including creative and open-ended ones. The researchers' method outperformed some of the most advanced AI models available, which could lead to more versatile and helpful AI assistants for everyday use. It also shows a way to improve AI performance without making the models bigger, which could make advanced AI more accessible and efficient.

Abstract

Inference-Time Scaling has been critical to the success of recent models such as OpenAI o1 and DeepSeek R1. However, many techniques used to train models for inference-time scaling require tasks to have answers that can be verified, limiting their application to domains such as math, coding and logical reasoning. We take inspiration from how humans make first attempts, ask for detailed feedback from others and make improvements based on such feedback across a wide spectrum of open-ended endeavors. To this end, we collect data for and train dedicated Feedback and Edit Models that are capable of performing inference-time scaling for open-ended general-domain tasks. In our setup, one model generates an initial response, which is given feedback by a second model, which is then used by a third model to edit the response. We show that performance on Arena Hard, a benchmark strongly predictive of Chatbot Arena Elo, can be boosted by scaling the number of initial response drafts, effective feedback and edited responses. When scaled optimally, our setup based on 70B models from the Llama 3 family can reach SoTA performance on Arena Hard at 92.7 as of 5 Mar 2025, surpassing OpenAI o1-preview-2024-09-12 with 90.4 and DeepSeek R1 with 92.3.