Beyond Multiple Choice: Verifiable OpenQA for Robust Vision-Language RFT
Yesheng Liu, Hao Li, Haiyu Xu, Baoqi Pei, Jiahao Wang, Mingxuan Zhao, Jingshu Zheng, Zheqi He, JG Yao, Bowen Qin, Xi Yang, Jiajun Zhang
2025-11-25
Summary
This paper focuses on improving how we test and train AI models that can understand both images and text, specifically when answering multiple-choice questions. It argues that current methods are flawed because the answer choices themselves can give away clues, rather than truly testing the AI's understanding.
What's the problem?
The main issue is that multiple-choice questions, while easy to grade automatically, aren't a great measure of an AI's actual intelligence. The way the answer options are worded can provide hints, allowing the AI to guess the correct answer without really 'understanding' the question. This is especially problematic when using these questions to 'teach' the AI through a process called reinforcement learning, as it can learn to exploit these hints instead of learning the underlying concepts.
What's the solution?
The researchers developed a system called ReVeL, which transforms multiple-choice questions into open-ended questions. This means instead of picking from options, the AI has to generate its own answer. ReVeL also tries to make sure these open-ended answers can still be checked for correctness. They used this system to train a specific AI model (Qwen2.5-VL) and found it performed better on open-ended question answering tasks. They also showed that ReVeL reveals how much multiple-choice scores are inflated due to those sneaky clues in the options.
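The core conversion idea can be pictured with a small sketch. Everything here — the record format, field names, and the rewriting function — is an illustrative assumption, not the paper's released code; in ReVeL an LLM performs the rewriting and checks whether the answer remains verifiable:

```python
# Illustrative sketch of an MCQ -> open-form conversion (hypothetical data shapes).
# ReVeL itself uses an LLM to rewrite questions; this only shows the I/O contract.

def rewrite_to_openqa(mcq):
    """Drop the options so the model must generate the answer itself."""
    gold = mcq["options"][mcq["answer_index"]]
    return {
        "question": mcq["question"],  # same question stem, options removed
        "gold_answer": gold,          # kept aside for automatic verification
    }

mcq = {
    "question": "What color is the traffic light in the image?",
    "options": ["red", "green", "yellow", "off"],
    "answer_index": 1,
}

open_qa = rewrite_to_openqa(mcq)
print(open_qa["question"])     # prompt no longer contains any options
print(open_qa["gold_answer"])  # "green"
```

The key point the sketch makes concrete: the options never reach the model's prompt, so there is no option wording left to exploit, while the gold answer is retained so correctness can still be checked automatically.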
Why it matters?
This work is important because it provides a more reliable way to evaluate and improve AI models that handle both images and text. By moving away from multiple-choice questions and towards open-ended questions, we can get a more accurate picture of a model's true capabilities and train models to be more robust and intelligent. It also suggests that current multiple-choice benchmarks may be overestimating how well these models actually perform.
Abstract
Multiple-choice question answering (MCQA) has been a popular format for evaluating and reinforcement fine-tuning (RFT) of modern multimodal language models. Its constrained output format allows for simplified, deterministic automatic verification. However, we find that the options may leak exploitable signals, which makes accuracy metrics unreliable indicators of real capability and encourages explicit or implicit answer-guessing behaviors during RFT. We propose ReVeL (Rewrite and Verify by LLM), a framework that rewrites multiple-choice questions into open-form questions while keeping answers verifiable whenever possible. The framework categorizes questions by answer type and applies different rewriting and verification schemes accordingly. For RFT, we convert 20k MCQA examples and use GRPO to fine-tune Qwen2.5-VL models. Models trained on ReVeL-OpenQA match MCQA accuracy on multiple-choice benchmarks and improve OpenQA accuracy by about six percentage points, indicating better data efficiency and more robust reward signals than MCQA-based training. When used for evaluation, ReVeL also reveals up to 20 percentage points of score inflation in MCQA benchmarks (relative to OpenQA), improves judging accuracy, and reduces both cost and latency. We will release code and data publicly.
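The abstract's "different verification schemes per answer type" can be sketched roughly as follows. The type taxonomy, matching rules, and fallback here are assumptions for illustration; the paper's actual categories and judge setup may differ:

```python
# Hypothetical sketch of answer-type routing for verification.
# Cheap deterministic checks are tried first; free-form answers would be
# delegated to an LLM judge (stubbed out here).
import re

def classify_answer(gold):
    """Guess a coarse answer type so a deterministic check can be tried first."""
    if re.fullmatch(r"-?\d+(\.\d+)?", gold.strip()):
        return "numeric"
    if len(gold.split()) <= 3:
        return "short_phrase"
    return "free_form"

def verify(prediction, gold):
    """Route to a verification scheme by answer type."""
    kind = classify_answer(gold)
    if kind == "numeric":
        try:
            return float(prediction.strip()) == float(gold.strip())
        except ValueError:
            return False
    if kind == "short_phrase":
        norm = lambda s: re.sub(r"[^a-z0-9 ]", "", s.lower()).strip()
        return norm(prediction) == norm(gold)
    # Free-form answers need semantic comparison, e.g. an LLM judge.
    raise NotImplementedError("delegate free-form answers to an LLM judge")

print(verify("Green", "green"))  # normalized string match
print(verify("3.0", "3"))        # numeric equivalence
```

Routing this way keeps most verification deterministic and cheap, reserving an LLM judge for the minority of answers that cannot be matched mechanically, which is consistent with the abstract's claim of improved judging accuracy at lower cost and latency.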