ViExam: Are Vision Language Models Better than Humans on Vietnamese Multimodal Exam Questions?
Vy Tuong Dang, An Vo, Quang Tau, Duc Dm, Daeyoung Kim
2025-08-21
Summary
This research investigates how well AI systems, specifically Vision Language Models (VLMs) that are usually trained on English, can understand and answer questions from Vietnamese educational tests that involve both pictures and text. The authors found that these AI models generally perform worse than human test-takers on these Vietnamese exams.
What's the problem?
Most powerful AI systems for understanding images and text, called VLMs, are trained mostly on English data. This makes it unclear if they can work well on real-world educational materials in languages other than English, especially when those materials require understanding both visual and text information. This paper looks at whether VLMs can handle Vietnamese educational assessments.
What's the solution?
The researchers created a new set of Vietnamese educational questions called ViExam, which includes 2,548 questions that combine images and text across seven subjects, including math, science, and geography. They then tested several state-of-the-art and open-source VLMs on this benchmark to measure how accurately they could answer the questions. They also tried pairing English instructions with the Vietnamese question content, and explored how human-in-the-loop collaboration could improve the AI's performance.
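The cross-lingual prompting setup described above can be sketched as follows. This is a minimal illustration, not the authors' actual evaluation code: the `query_vlm` parameter is a hypothetical stand-in for whatever model API is being tested, and the instruction wording is an assumption.

```python
# Sketch of cross-lingual prompting: an English instruction wrapped around
# Vietnamese question content, scored as multiple-choice accuracy.
# `query_vlm` is a hypothetical callable standing in for a real VLM API.

ENGLISH_INSTRUCTION = (
    "Answer the following multiple-choice question. "
    "Respond with a single letter (A, B, C, or D)."
)

def build_cross_lingual_prompt(vietnamese_question: str) -> str:
    """Combine an English instruction with untranslated Vietnamese content."""
    return f"{ENGLISH_INSTRUCTION}\n\n{vietnamese_question}"

def evaluate(questions, answer_key, query_vlm):
    """Return mean accuracy of `query_vlm` over multiple-choice questions."""
    correct = 0
    for question, gold in zip(questions, answer_key):
        prediction = query_vlm(build_cross_lingual_prompt(question))
        correct += int(prediction.strip().upper() == gold)
    return correct / len(questions)
```

In the paper's actual benchmark the questions are multimodal, so a real harness would pass the exam image alongside the text prompt; this sketch shows only the prompt-construction and scoring logic.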
Why it matters?
This work is important because it is the first comprehensive evaluation of how AI systems perform on Vietnamese educational tests that combine images and text. It helps show whether current AI technology can be useful for education in languages with less available training data, and it highlights the challenges of making AI work reliably across languages and cultures: even advanced AI models struggle on tasks that average human students handle well.
Abstract
Vision language models (VLMs) demonstrate remarkable capabilities on English multimodal tasks, but their performance on low-resource languages with genuinely multimodal educational content remains largely unexplored. In this work, we test how VLMs perform on Vietnamese educational assessments, investigating whether VLMs trained predominantly on English data can handle real-world cross-lingual multimodal reasoning. Our work presents the first comprehensive evaluation of VLM capabilities on multimodal Vietnamese exams through proposing ViExam, a benchmark containing 2,548 multimodal questions. We find that state-of-the-art VLMs achieve only 57.74% mean accuracy, while open-source models achieve 27.70%, across 7 academic domains: Mathematics, Physics, Chemistry, Biology, Geography, Driving Test, and IQ Test. Most VLMs underperform average human test-takers (66.54%), with only the thinking VLM o3 (74.07%) exceeding human average performance, yet still falling substantially short of human best performance (99.60%). Cross-lingual prompting with English instructions while maintaining Vietnamese content fails to improve performance, decreasing accuracy by 1 percentage point for SOTA VLMs. Human-in-the-loop collaboration can partially improve VLM performance by 5 percentage points. Code and data are available at: https://vi-exam.github.io.