HelpSteer3-Preference: Open Human-Annotated Preference Data across Diverse Tasks and Languages

Zhilin Wang, Jiaqi Zeng, Olivier Delalleau, Hoo-Chang Shin, Felipe Soares, Alexander Bukharin, Ellie Evans, Yi Dong, Oleksii Kuchaiev

2025-05-20

Summary

This paper introduces HelpSteer3-Preference, a new dataset in which human annotators carefully rated and compared different AI responses across a wide variety of tasks and languages.

What's the problem?

The problem is that AI models need to learn what kinds of answers people actually like, but it's hard to teach them this without a lot of high-quality examples showing human preferences across different situations and languages.

What's the solution?

To solve this, the researchers created a large, carefully curated collection of human preference annotations covering many types of tasks and languages. This data is then used to train reward models, which in turn guide AI systems to produce answers that better match what people actually want.
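To give a sense of how preference annotations like these are used, here is a minimal sketch of the standard Bradley-Terry pairwise objective commonly used to train reward models from chosen/rejected response pairs. This is an illustrative assumption about the general RLHF recipe, not code from the paper, and the toy scores are made up.

```python
import math

def preference_loss(reward_chosen, reward_rejected):
    # Bradley-Terry style pairwise loss: -log(sigmoid(r_chosen - r_rejected)).
    # It pushes the reward of the human-preferred response above the rejected one.
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Hypothetical reward scores for (preferred, rejected) response pairs.
pairs = [(1.2, -0.3), (0.5, 0.4), (-0.1, -0.9)]
avg_loss = sum(preference_loss(c, r) for c, r in pairs) / len(pairs)
```

A well-trained reward model assigns a higher score to the preferred response, which drives this loss toward zero.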

Why does it matter?

This matters because it helps make AI more helpful, fair, and accurate for people everywhere, no matter what language they speak or what kind of help they need.

Abstract

HelpSteer3-Preference, a high-quality human-annotated dataset, enhances Reward Models for Reinforcement Learning from Human Feedback, achieving top performance on RM-Bench and JudgeBench.