HumanEval-V: Evaluating Visual Understanding and Reasoning Abilities of Large Multimodal Models Through Coding Tasks
Fengji Zhang, Linquan Wu, Huiyu Bai, Guancheng Lin, Xiao Li, Xiao Yu, Yue Wang, Bei Chen, Jacky Keung
2024-10-17

Summary
This paper introduces HumanEval-V, a new benchmark designed to evaluate how well large multimodal models (LMMs) can understand and reason about visual information while generating code.
What's the problem?
While coding tasks are a well-established way to test AI models, there has been no rigorous benchmark that assesses LMMs on coding tasks that require visual reasoning. Most existing benchmarks do not effectively evaluate how these models handle coding problems in which understanding an image or diagram is essential to the solution.
What's the solution?
To fill this gap, the authors developed HumanEval-V, which includes 108 entry-level Python coding tasks that require visual understanding. The tasks are adapted from problems on platforms such as CodeForces and Stack Overflow, with the contexts, algorithmic patterns, and visual elements modified or redrawn so that the visual information is essential to each problem and data leakage from the original sources is avoided. Each task pairs an image with a predefined Python function signature and ships with carefully handcrafted test cases for checking the generated solutions. The authors evaluated 19 state-of-the-art LMMs on the benchmark and found that even the strongest models struggled (GPT-4o reaches only 13% pass@1), indicating a clear need for improvement in visual reasoning capabilities.
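To make the task format concrete, here is a small, entirely hypothetical sketch of what a HumanEval-V style problem could look like. The function name, the visual rule, and the test values are invented for illustration and are not taken from the benchmark; in the real benchmark the rule to implement is conveyed by an image rather than by a comment.

```python
# Hypothetical illustration of the task format (not an actual benchmark item):
# the model is shown an image plus a predefined function signature and must
# complete the body so that it passes the handcrafted test cases.

def count_shaded_cells(n: int) -> int:
    """Return how many cells are shaded in the n-th pattern shown in the
    (hypothetical) accompanying image."""
    # Suppose the image depicts a staircase pattern in which pattern n
    # shades 1 + 2 + ... + n cells; a correct solution encodes that
    # visual rule in code:
    return n * (n + 1) // 2

# Handcrafted test cases then verify the generated solution:
assert count_shaded_cells(1) == 1
assert count_shaded_cells(3) == 6
assert count_shaded_cells(10) == 55
```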
Why it matters?
This research is significant because it highlights the challenges LMMs face in understanding visual information when generating code. By providing a structured way to assess these abilities, HumanEval-V can help guide future improvements in AI models, making them more capable of handling real-world tasks that involve both coding and visual comprehension.
Abstract
Coding tasks have been valuable for evaluating Large Language Models (LLMs), as they demand the comprehension of high-level instructions, complex reasoning, and the implementation of functional programs -- core capabilities for advancing Artificial General Intelligence. Despite the progress in Large Multimodal Models (LMMs), which extend LLMs with visual perception and understanding capabilities, there remains a notable lack of coding benchmarks that rigorously assess these models, particularly in tasks that emphasize visual reasoning. To address this gap, we introduce HumanEval-V, a novel and lightweight benchmark specifically designed to evaluate LMMs' visual understanding and reasoning capabilities through code generation. HumanEval-V includes 108 carefully crafted, entry-level Python coding tasks derived from platforms like CodeForces and Stack Overflow. Each task is adapted by modifying the context and algorithmic patterns of the original problems, with visual elements redrawn to ensure distinction from the source, preventing potential data leakage. LMMs are required to complete the code solution based on the provided visual context and a predefined Python function signature outlining the task requirements. Every task is equipped with meticulously handcrafted test cases to ensure a thorough and reliable evaluation of model-generated solutions. We evaluate 19 state-of-the-art LMMs using HumanEval-V, uncovering significant challenges. Proprietary models like GPT-4o achieve only 13% pass@1 and 36.4% pass@10, while open-weight models with 70B parameters score below 4% pass@1. Ablation studies further reveal the limitations of current LMMs in vision reasoning and coding capabilities. These results underscore key areas for future research to enhance LMMs' capabilities. We have open-sourced our code and benchmark at https://github.com/HumanEval-V/HumanEval-V-Benchmark.
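The pass@1 and pass@10 figures above follow the pass@k convention popularized by the original HumanEval benchmark. The snippet below sketches the standard unbiased pass@k estimator from that line of work (Chen et al., 2021); whether HumanEval-V computes its scores with exactly this formula is an assumption here, and the example numbers are purely illustrative.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Standard unbiased pass@k estimator: the probability that at least
    one of k samples, drawn without replacement from n generated solutions
    of which c are correct, passes all test cases."""
    if n - c < k:
        return 1.0  # every size-k draw necessarily contains a correct solution
    return float(1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# Illustrative usage: 20 samples per task, 3 of which pass the tests
print(pass_at_k(n=20, c=3, k=1))   # ~0.15
print(pass_at_k(n=20, c=3, k=10))  # ~0.89
```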