Do Vision-Language Models Measure Up? Benchmarking Visual Measurement Reading with MeasureBench

Fenfen Lin, Yesheng Liu, Haiyu Xu, Chen Yue, Zheqi He, Mingxuan Zhao, Miguel Hu Chen, Jiakang Liu, JG Yao, Xi Yang

2025-11-04

Summary

This paper focuses on the difficulty current artificial intelligence models, specifically those that combine vision and language, have with a seemingly simple task: reading measurements from things like gauges and scales.

What's the problem?

While humans can easily understand what a gauge shows, even the most advanced AI models struggle. They can often recognize the numbers and labels, but they frequently misinterpret *where* the pointer is on the scale, leading to incorrect readings. This isn't just a minor issue; it results in significant numerical errors, showing the models don't truly understand the visual information.
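To see why a small localization mistake produces a large numeric error, consider how an analog gauge maps a pointer angle to a value. The sketch below is illustrative (the dial geometry and ranges are assumed, not taken from the paper): on a typical 270-degree dial, mislocating the pointer by just 10 degrees shifts the reading by almost 4% of the full range.

```python
def gauge_reading(pointer_deg, start_deg=-45.0, end_deg=225.0,
                  min_val=0.0, max_val=100.0):
    """Map a pointer angle to a value by linear interpolation over the dial arc."""
    frac = (pointer_deg - start_deg) / (end_deg - start_deg)
    return min_val + frac * (max_val - min_val)

# A 10-degree localization error on a 270-degree dial shifts the
# reading by 10/270 of the full range: about 3.7 units here.
true_reading = gauge_reading(90.0)    # pointer truly at 90 degrees
off_reading = gauge_reading(100.0)    # model mislocalizes by 10 degrees
print(round(off_reading - true_reading, 2))
```

This is why "plausible" answers from a model that reads the labels correctly can still be numerically far off: the error scales with the full range of the instrument, not with the size of the visual mistake.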

What's the solution?

The researchers created a new testing dataset called MeasureBench, filled with both real and computer-generated images of various measurement tools. They also developed a way to automatically create more of these images, allowing them to easily change things like the pointer style, scale details, and background clutter. They then tested several AI models on this dataset and even tried using a technique called reinforcement learning to improve performance on the generated images.
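The key property of such a synthesis pipeline is that every image comes with controllable parameters and an exact ground-truth reading. The sketch below illustrates that idea; the parameter names and choices are invented for illustration and are not the authors' actual pipeline schema.

```python
import random

def synthesize_gauge_config(seed=None):
    """Sample a hypothetical gauge specification plus its ground-truth reading.

    Illustrates the kind of controllable variation described in the paper
    (pointer style, scale range, fonts, clutter); a renderer would turn
    this spec into an image, and true_value serves as the evaluation label.
    """
    rng = random.Random(seed)
    min_val = 0.0
    max_val = rng.choice([10.0, 100.0, 300.0])
    return {
        "min_val": min_val,
        "max_val": max_val,
        "major_ticks": rng.choice([5, 10, 15]),
        "pointer_style": rng.choice(["needle", "arrow", "bar"]),
        "font": rng.choice(["sans", "serif", "mono"]),
        "clutter_level": rng.random(),                 # background distractors
        "true_value": rng.uniform(min_val, max_val),   # ground-truth label
    }

cfg = synthesize_gauge_config(seed=0)
print(cfg["pointer_style"], round(cfg["true_value"], 1))
```

Because the ground truth is generated rather than annotated by hand, the dataset can be scaled up cheaply, which is also what makes the reinforcement-learning experiments over synthetic data feasible.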

Why it matters?

This work highlights a key weakness in current AI: they struggle with precise spatial understanding. It’s not enough to just recognize numbers; the AI needs to accurately pinpoint locations within an image. By creating MeasureBench, the researchers provide a valuable resource for improving AI’s ability to ‘measure’ the world around it, which is crucial for applications like robotics, self-driving cars, and scientific data analysis.

Abstract

Reading measurement instruments is effortless for humans and requires relatively little domain expertise, yet it remains surprisingly challenging for current vision-language models (VLMs), as we find in preliminary evaluation. In this work, we introduce MeasureBench, a benchmark for visual measurement reading covering both real-world and synthesized images of various types of measurement instruments, along with an extensible pipeline for data synthesis. Our pipeline procedurally generates a specified type of gauge with controllable visual appearance, enabling scalable variation in key details such as pointers, scales, fonts, lighting, and clutter. Evaluation of popular proprietary and open-weight VLMs shows that even the strongest frontier VLMs struggle with measurement reading in general. A consistent failure mode is indicator localization: models can read digits or labels but misidentify the key positions of pointers or alignments, leading to large numeric errors despite plausible textual reasoning. We have also conducted preliminary experiments with reinforcement learning over synthetic data, finding encouraging results on the in-domain synthetic subset but less promising ones on real-world images. Our analysis highlights a fundamental limitation of current VLMs in fine-grained spatial grounding. We hope this resource can support future advances in visually grounded numeracy and precise spatial perception for VLMs, bridging the gap between recognizing numbers and measuring the world.