Interpretable Physics Reasoning and Performance Taxonomy in Vision-Language Models

Pranav Pawar, Kavish Shah, Akshat Bhalani, Komal Kasat, Dev Mittal, Hadi Gala, Deepali Patil, Nikita Raichada, Monali Deshmukh

2025-09-15

Summary

This paper investigates how well advanced Vision-Language Models (VLMs) understand basic physics concepts, such as how objects move and interact. The goal is to test whether these AI systems can actually *reason* about the physical world, rather than just recognize images or repeat memorized information.

What's the problem?

Although VLMs are becoming very capable across many tasks, it is unclear whether they truly understand fundamental scientific principles, particularly physics. Existing benchmarks do not probe this understanding deeply enough, so a better way is needed to evaluate whether these models can actually apply physics knowledge to solve problems.

What's the solution?

The researchers built a new framework to test VLMs on physics. A scenario generator automatically produces over 400 different problems across four domains: projectile motion, collision dynamics, mechanics, and fluid dynamics (a rough sketch of such a generator appears below). They then evaluated four state-of-the-art VLMs on these problems and found that larger models generally performed better, with the best model, Qwen2.5-VL-7B, scoring 0.815 overall. The models handled formulaic, plug-in-the-numbers calculations well but struggled with problems that require abstract spatial reasoning about how objects relate to each other in space.
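To make this concrete, here is a minimal, hypothetical sketch (in Python) of what an automatic scenario generator for the projectile-motion domain could look like. The function names, parameter ranges, and question template are illustrative assumptions, not the authors' actual implementation.

```python
import random
import math

# Hypothetical sketch of an automatic scenario generator in the spirit of the
# paper's framework. Parameter ranges and the question template are assumptions.

def generate_projectile_problem(seed: int) -> dict:
    """Generate one 2D projectile-motion problem with a known ground-truth answer."""
    rng = random.Random(seed)
    v0 = rng.uniform(5.0, 50.0)          # launch speed, m/s
    angle_deg = rng.uniform(10.0, 80.0)  # launch angle above horizontal, degrees
    g = 9.81                             # gravitational acceleration, m/s^2

    angle = math.radians(angle_deg)
    # Closed-form range on flat ground: R = v0^2 * sin(2*theta) / g
    answer = (v0 ** 2) * math.sin(2 * angle) / g

    question = (
        f"A projectile is launched at {v0:.1f} m/s at {angle_deg:.1f} degrees "
        f"above the horizontal on flat ground. Ignoring air resistance, what "
        f"horizontal distance (in meters) does it travel before landing?"
    )
    return {"domain": "projectile_motion", "question": question, "answer": answer}


if __name__ == "__main__":
    # Generate a small testbed of distinct, automatically graded problems.
    for p in (generate_projectile_problem(seed=i) for i in range(5)):
        print(p["question"], "->", round(p["answer"], 2))
```

Because every generated problem carries its own ground-truth answer, a model's free-form response can be graded automatically, which is what makes a 400+ problem testbed practical.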

Why it matters?

This work is important because it provides a standardized and accessible way to measure how well AI understands physics. This helps researchers understand the strengths and weaknesses of these models, and ultimately improve their ability to reason about the real world. It also opens the door for more research into scientific reasoning in AI, making it easier for others to study and build upon this work.

Abstract

As Vision-Language Models (VLMs) grow in sophistication, their ability to perform reasoning is coming under increasing scrutiny. While they excel at many tasks, their grasp of fundamental scientific principles, such as physics, remains an underexplored frontier. To reflect the advancements in these capabilities, we introduce a novel and accessible framework designed to rigorously evaluate VLMs on their understanding of 2D physics. Our framework features a programmatic scenario generator that creates a diverse testbed of over 400 problems across four core domains: Projectile Motion, Collision Dynamics, Mechanics, and Fluid Dynamics. Through comprehensive evaluation of four state-of-the-art VLMs, we demonstrate a strong correlation between model scale and reasoning ability, with our top-performing model, Qwen2.5-VL-7B, achieving an overall score of 0.815. We find that while models excel at formulaic problems, they struggle significantly with domains requiring abstract spatial reasoning. By designing this framework, we aim to democratize the study of scientific reasoning in VLMs and foster deeper insights into their capabilities and limitations.
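The abstract reports per-model overall scores (e.g. 0.815 for the top model). As a rough illustration of how such a score could be aggregated from graded answers, here is a small Python sketch; the tolerance-based grading rule and field names are assumptions for illustration, not the paper's exact scoring protocol.

```python
from collections import defaultdict

# Minimal sketch of aggregating per-domain and overall scores from graded
# predictions. The 5% relative-tolerance rule is an assumption, not the
# authors' grading criterion.

def grade(predictions: list[dict], rel_tol: float = 0.05) -> dict:
    """predictions: [{"domain": str, "predicted": float, "answer": float}, ...]"""
    per_domain = defaultdict(list)
    for p in predictions:
        correct = abs(p["predicted"] - p["answer"]) <= rel_tol * abs(p["answer"])
        per_domain[p["domain"]].append(correct)

    scores = {domain: sum(flags) / len(flags) for domain, flags in per_domain.items()}
    total = sum(len(flags) for flags in per_domain.values())
    scores["overall"] = sum(sum(flags) for flags in per_domain.values()) / total
    return scores


if __name__ == "__main__":
    demo = [
        {"domain": "projectile_motion", "predicted": 41.8, "answer": 41.5},
        {"domain": "fluid_dynamics", "predicted": 3.0, "answer": 5.0},
    ]
    print(grade(demo))  # {'projectile_motion': 1.0, 'fluid_dynamics': 0.0, 'overall': 0.5}
```

Reporting both per-domain and overall scores is what lets the authors pinpoint where models do well (formulaic domains) versus where they break down (spatial reasoning).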