Self-Improving VLM Judges Without Human Annotations

Inna Wanyin Lin, Yushi Hu, Shuyue Stella Li, Scott Geng, Pang Wei Koh, Luke Zettlemoyer, Tim Althoff, Marjan Ghazvininejad

2025-12-08

Summary

This paper introduces a new way to create a good 'judge' for Vision-Language Models (VLMs), which are AI systems that can understand both images and text. Instead of relying on people to tell the AI what's good or bad, this research develops a system where the AI essentially teaches itself.

What's the problem?

Developing VLMs requires a way to evaluate how well they're doing. Traditionally, this is done by having humans compare different responses from the AI and rank them by quality. However, collecting these human evaluations is expensive and slow, and because VLMs improve rapidly, annotations gathered for older models quickly become outdated. It's hard to keep up with a rapidly improving AI using only human judgment.

What's the solution?

The researchers created a three-step process to train an AI judge without any human input. First, the system generates many image-text instruction-response pairs at deliberately varied quality levels, some good and some intentionally flawed. Then, the judge produces 'reasoning traces' – explanations of why a response is good or bad – along with a verdict, and the system filters out any judgments that don't match the intended quality level of the pair. Finally, the judge is trained on the correct verdicts and their reasoning traces, improving its ability to judge future responses, and the whole loop can be repeated. They used a Llama-3.2-11B model as the base for their judge.
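The three-step loop above can be sketched in a few lines of Python. This is a toy illustration, not the authors' code: the functions, data fields, and the random stand-in judge are all hypothetical, and a real implementation would call a VLM at each stage and fine-tune on the filtered data.

```python
import random

def generate_pairs(n):
    """Stage 1: synthesize instruction-response pairs at known quality levels."""
    # Each pair carries an intended quality label ("good" or "bad"), set at
    # generation time, e.g. by deliberately corrupting some responses.
    return [{"prompt": f"q{i}", "response": f"r{i}",
             "intended": random.choice(["good", "bad"])} for i in range(n)]

def judge(pair):
    """Stage 2: the current judge emits a reasoning trace and a verdict."""
    # Toy judge: random verdict with a canned trace. A real VLM judge would
    # condition on the image, the instruction, and the response.
    verdict = random.choice(["good", "bad"])
    trace = f"The response {'answers' if verdict == 'good' else 'misses'} the question."
    return trace, verdict

def build_training_set(pairs):
    """Filter: keep only judgments that agree with the intended quality level."""
    kept = []
    for p in pairs:
        trace, verdict = judge(p)
        if verdict == p["intended"]:
            kept.append({**p, "trace": trace, "verdict": verdict})
    return kept

# Stage 3 would fine-tune the judge on `kept` (verdicts plus reasoning traces);
# here we only report the yield of the filtering step.
data = build_training_set(generate_pairs(100))
print(f"kept {len(data)} of 100 self-labeled examples")
```

The key idea the sketch captures is that the intended quality label assigned at generation time acts as a free supervision signal: no human ever ranks the responses, yet incorrect judgments are automatically discarded before training.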

Why does it matter?

This research is important because it shows we can build effective AI judges without constant human involvement. The AI judge they created performed surprisingly well, even beating much larger and more complex models like GPT-4o in some areas. This opens the door to creating AI systems that can continuously improve themselves, keeping pace with the rapid advancements in VLM technology, and reducing the need for costly human feedback.

Abstract

Effective judges of Vision-Language Models (VLMs) are crucial for model development. Current methods for training VLM judges mainly rely on large-scale human preference annotations. However, such an approach is costly, and the annotations easily become obsolete as models rapidly improve. In this work, we present a framework to self-train a VLM judge model without any human preference annotations, using only self-synthesized data. Our method is iterative and has three stages: (1) generate diverse multimodal instruction-response pairs at varying quality levels, (2) generate reasoning traces and judgments for each pair, removing the ones that do not match our expected quality levels, and (3) train on correct judge answers and their reasoning traces. We evaluate the resulting judge on Multimodal RewardBench and VL-RewardBench across domains: correctness, preference, reasoning, safety, and visual question-answering. Our method improves a Llama-3.2-11B multimodal judge from 0.38 to 0.51 in overall accuracy on VL-RewardBench, often outperforming much larger models including Llama-3.2-90B, GPT-4o, and Claude 3.5 Sonnet, with particularly strong gains in general, hallucination, and reasoning dimensions. The overall strength of these human-annotation-free results suggests the potential for a future self-judge that evolves alongside rapidly improving VLM capabilities.