Can Multimodal LLMs See Materials Clearly? A Multimodal Benchmark on Materials Characterization
Zhengzhao Lai, Youbin Zheng, Zhenyang Cai, Haonan Lyu, Jinpu Yang, Hongqing Liang, Yan Hu, Benyou Wang
2025-09-19
Summary
This paper introduces a new way to test how well multimodal large language models, AI systems that can 'see' images, understand the images produced when characterizing materials. Scientists use these images to work out how a material was processed, what its internal structure looks like, and how both of those affect its properties.
What's the problem?
Currently, AI models are getting good at generative and predictive tasks, but they haven't been thoroughly tested on the complex characterization images materials scientists interpret every day. Reading these images correctly requires specialized domain knowledge, and existing models fall well short of human experts, struggling most on questions that demand deeper expertise and careful visual analysis.
What's the solution?
The researchers created a benchmark called MatCha, a collection of 1,500 questions about materials characterization images. The questions span 21 tasks across four key stages of materials research and are designed to be challenging even for experts. They then evaluated several state-of-the-art AI models on MatCha and found a significant gap relative to human experts, one that persisted even when the models were given worked examples or prompted to 'think' step by step.
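To make this kind of evaluation concrete, here is a minimal sketch of how one might score a vision-capable model on MatCha-style multiple-choice questions with a chain-of-thought prompt. It assumes an OpenAI-compatible vision API and a local JSON file of question records; the field names (`image_path`, `question`, `choices`, `answer`), the file name, and the prompt wording are illustrative assumptions, not the benchmark's actual harness.

```python
# Minimal sketch: scoring a vision-language model on MatCha-style
# multiple-choice questions with a chain-of-thought prompt.
# Assumes an OpenAI-compatible API; the record fields (image_path,
# question, choices, answer) are illustrative, not MatCha's real schema.
import base64
import json
import re

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

COT_PROMPT = (
    "You are a materials characterization expert. Reason step by step "
    "about the image, then give your final answer as a single letter "
    "on the last line, e.g. 'Answer: B'."
)

def encode_image(path: str) -> str:
    """Return the image as a base64 data URL for the API."""
    with open(path, "rb") as f:
        return "data:image/png;base64," + base64.b64encode(f.read()).decode()

def ask(record: dict) -> str:
    """Query the model on one question; return the predicted letter."""
    options = "\n".join(
        f"{letter}. {text}"
        for letter, text in zip("ABCD", record["choices"])
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # any vision-capable model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": f"{COT_PROMPT}\n\n{record['question']}\n{options}"},
                {"type": "image_url",
                 "image_url": {"url": encode_image(record["image_path"])}},
            ],
        }],
    )
    reply = response.choices[0].message.content
    match = re.search(r"Answer:\s*([A-D])", reply)
    return match.group(1) if match else ""

if __name__ == "__main__":
    with open("matcha_questions.json") as f:  # hypothetical file
        questions = json.load(f)
    correct = sum(ask(q) == q["answer"] for q in questions)
    print(f"Accuracy: {correct}/{len(questions)}")
```

The same loop can drop the step-by-step instruction (or prepend a few solved examples) to compare zero-shot, few-shot, and chain-of-thought settings, which is the kind of comparison the paper reports.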
Why it matters?
This work is important because it exposes the limitations of current AI in a crucial scientific field. By creating MatCha, the researchers provide a tool for developing better AI that can genuinely help with tasks like discovering new materials and building autonomous systems for scientific research. It’s a step towards AI that can truly assist scientists, not just generate text.
Abstract
Materials characterization is fundamental to acquiring materials information, revealing the processing-microstructure-property relationships that guide material design and optimization. While multimodal large language models (MLLMs) have recently shown promise in generative and predictive tasks within materials science, their capacity to understand real-world characterization imaging data remains underexplored. To bridge this gap, we present MatCha, the first benchmark for materials characterization image understanding, comprising 1,500 questions that demand expert-level domain knowledge. MatCha encompasses four key stages of materials research, spanning 21 distinct tasks, each designed to reflect authentic challenges faced by materials scientists. Our evaluation of state-of-the-art MLLMs on MatCha reveals a significant performance gap compared to human experts. These models exhibit degradation when addressing questions requiring higher-level expertise and sophisticated visual perception. Simple few-shot and chain-of-thought prompting struggle to alleviate these limitations. These findings highlight that existing MLLMs still exhibit limited adaptability to real-world materials characterization scenarios. We hope MatCha will facilitate future research in areas such as new material discovery and autonomous scientific agents. MatCha is available at https://github.com/FreedomIntelligence/MatCha.