
Prompt-to-Leaderboard

Evan Frick, Connor Chen, Joseph Tennyson, Tianle Li, Wei-Lin Chiang, Anastasios N. Angelopoulos, Ion Stoica

2025-02-26


Summary

This paper introduces a new way to evaluate AI language models called Prompt-to-Leaderboard (P2L), which creates specific rankings for each type of question or task, rather than just giving one overall score.

What's the problem?

Current methods for testing AI language models usually give them an average score based on how well they do across many different tasks. This averaging hides the fact that a model might be really good at some things but weak at others, which can be important information when deciding which model to use.

What's the solution?

The researchers created P2L, which uses an AI model to predict how well different language models will do on a specific prompt or question. This produces a unique ranking, or 'leaderboard', for each prompt, showing which models are best suited to different situations. They tested this method using human preference data from the Chatbot Arena competition and found it gives a more detailed picture of how models perform.
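To make the idea concrete, here is a minimal sketch of the Bradley-Terry model that P2L builds on. The coefficient values and model names below are invented for illustration; in the actual method, a trained LLM outputs a coefficient vector for each prompt, and those coefficients determine both pairwise win probabilities and the prompt-specific ranking.

```python
import math

# Hypothetical prompt-specific Bradley-Terry coefficients, as P2L's
# trained model might output for one prompt (values invented here;
# the real coefficients are learned from Chatbot Arena votes).
coeffs = {"model_a": 1.2, "model_b": 0.4, "model_c": -0.3}

def win_probability(theta_a: float, theta_b: float) -> float:
    """Bradley-Terry: P(A beats B) = sigmoid(theta_a - theta_b)."""
    return 1.0 / (1.0 + math.exp(-(theta_a - theta_b)))

# Probability that model_a is preferred over model_b on this prompt.
p = win_probability(coeffs["model_a"], coeffs["model_b"])
print(f"P(model_a beats model_b on this prompt) = {p:.3f}")

# The prompt-specific leaderboard is just the coefficients sorted
# in descending order.
leaderboard = sorted(coeffs, key=coeffs.get, reverse=True)
print("Prompt-specific leaderboard:", leaderboard)
```

A different prompt would yield different coefficients, and therefore a different leaderboard, which is exactly the per-prompt granularity that a single averaged score cannot capture.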

Why it matters?

This matters because it helps us understand AI language models better, showing their strengths and weaknesses more clearly. It could help people choose the right AI for specific tasks, build systems that combine the strengths of different models, and guide the development of better models in the future. The researchers even used this method to build a router that became the top performer on the Chatbot Arena leaderboard.

Abstract

Large language model (LLM) evaluations typically rely on aggregated metrics like accuracy or human preference, averaging across users and prompts. This averaging obscures user- and prompt-specific variations in model performance. To address this, we propose Prompt-to-Leaderboard (P2L), a method that produces leaderboards specific to a prompt. The core idea is to train an LLM taking natural language prompts as input to output a vector of Bradley-Terry coefficients which are then used to predict the human preference vote. The resulting prompt-dependent leaderboards allow for unsupervised task-specific evaluation, optimal routing of queries to models, personalization, and automated evaluation of model strengths and weaknesses. Data from Chatbot Arena suggest that P2L better captures the nuanced landscape of language model performance than the averaged leaderboard. Furthermore, our findings suggest that P2L's ability to produce prompt-specific evaluations follows a power law scaling similar to that observed in LLMs themselves. In January 2025, the router we trained based on this methodology achieved the #1 spot in the Chatbot Arena leaderboard. Our code is available at this GitHub link: https://github.com/lmarena/p2l.
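The abstract mentions using the prompt-dependent coefficients for "optimal routing of queries to models". A minimal sketch of one plausible routing rule, choosing the strongest model whose per-query cost fits a budget, is below; the model names, costs, coefficients, and the budget-constrained rule itself are illustrative assumptions, not the paper's exact router.

```python
# Hypothetical per-prompt Bradley-Terry coefficients and per-query
# costs (all values invented for illustration).
coeffs = {"big_model": 1.5, "mid_model": 0.9, "small_model": 0.2}
costs = {"big_model": 10.0, "mid_model": 2.0, "small_model": 0.5}

def route(coeffs: dict, costs: dict, budget: float) -> str:
    """Pick the highest-coefficient model whose cost fits the budget."""
    affordable = {m: c for m, c in coeffs.items() if costs[m] <= budget}
    if not affordable:
        raise ValueError("no model fits the budget")
    return max(affordable, key=affordable.get)

print(route(coeffs, costs, budget=3.0))   # cheaper model chosen under a tight budget
print(route(coeffs, costs, budget=20.0))  # strongest model chosen when budget allows
```

Because the coefficients change per prompt, the same budget can route different prompts to different models, which is how a router built this way can outperform any single fixed model.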