ProfBench: Multi-Domain Rubrics requiring Professional Knowledge to Answer and Judge

Zhilin Wang, Jaehun Jung, Ximing Lu, Shizhe Diao, Ellie Evans, Jiaqi Zeng, Pavlo Molchanov, Yejin Choi, Jan Kautz, Yi Dong

2025-10-23

Summary

This paper introduces ProfBench, a new benchmark for testing how well large language models (LLMs) handle complex professional tasks, such as analyzing documents and writing comprehensive reports, going beyond simple question answering.

What's the problem?

Currently, it's hard to accurately judge how good LLMs are because checking their answers often requires specialized knowledge. Existing tests mostly focus on areas like math or coding, where answers are easily verified. Evaluating LLMs on tasks that need professional expertise, such as physics or finance, is difficult and expensive because experts must review the responses. This limits our ability to improve these models for real-world applications.

What's the solution?

The researchers created ProfBench, a collection of over 7,000 response-criterion pairs, all evaluated by people with advanced degrees (PhDs and MBAs) in physics, chemistry, finance, and consulting. To make evaluating LLMs on this benchmark more practical, they also developed 'LLM-Judges': other LLMs that act as fair and affordable evaluators, cutting the cost of assessment by two to three orders of magnitude and mitigating the tendency of LLMs to favor their own outputs (self-enhancement bias). They then used ProfBench to test several leading LLMs.
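To make the rubric idea concrete, here is a minimal sketch of how rubric-based judging can work: a response is checked against weighted criteria, and the score is the weighted fraction of criteria satisfied. The `judge` function, the criteria, and the weights below are all hypothetical stand-ins (a real setup would prompt an LLM-Judge for each response-criterion pair), not the paper's actual implementation.

```python
# Hypothetical sketch of rubric-based scoring in the style of
# ProfBench's response-criterion pairs. Not the paper's code.

def judge(response: str, criterion: str) -> bool:
    # Stand-in judge: a real implementation would prompt an LLM with
    # the response and the criterion and parse a yes/no verdict.
    return criterion.lower() in response.lower()

def rubric_score(response: str, criteria: list[tuple[str, float]]) -> float:
    """Weighted fraction of rubric criteria the response satisfies."""
    total = sum(weight for _, weight in criteria)
    earned = sum(weight for crit, weight in criteria if judge(response, crit))
    return earned / total

# Hypothetical finance rubric items with importance weights.
criteria = [
    ("discount rate", 2.0),
    ("terminal value", 1.0),
]
report = "The valuation applies a 10% discount rate to future cash flows."
print(round(rubric_score(report, criteria), 2))  # 0.67
```

Scoring per criterion rather than grading the whole report at once is what lets cheaper judge models stay reliable: each check is a narrow yes/no question instead of an open-ended quality rating.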

Why it matters?

This work is important because it provides a more realistic and challenging benchmark for LLMs. The results show that even the best models still struggle with complex professional tasks (the top performer, GPT-5-high, reaches only 65.9% overall), and there is a noticeable performance gap between proprietary models and open-weight ones. This helps researchers understand where LLMs need to improve to be truly useful in professional settings, and it highlights the value of extended 'thinking' when dealing with complicated information.

Abstract

Evaluating progress in large language models (LLMs) is often constrained by the challenge of verifying responses, limiting assessments to tasks like mathematics, programming, and short-form question-answering. However, many real-world applications require evaluating LLMs in processing professional documents, synthesizing information, and generating comprehensive reports in response to user queries. We introduce ProfBench: a set of over 7000 response-criterion pairs as evaluated by human experts with professional knowledge across Physics PhD, Chemistry PhD, Finance MBA and Consulting MBA. We build robust and affordable LLM-Judges to evaluate ProfBench rubrics, by mitigating self-enhancement bias and reducing the cost of evaluation by 2-3 orders of magnitude, to make it fair and accessible to the broader community. Our findings reveal that ProfBench poses significant challenges even for state-of-the-art LLMs, with top-performing models like GPT-5-high achieving only 65.9% overall performance. Furthermore, we identify notable performance disparities between proprietary and open-weight models and provide insights into the role that extended thinking plays in addressing complex, professional-domain tasks. Data: https://huggingface.co/datasets/nvidia/ProfBench and Code: https://github.com/NVlabs/ProfBench