
From Charts to Code: A Hierarchical Benchmark for Multimodal Models

Jiahao Tang, Henry Hengyuan Zhao, Lijian Wu, Yifei Tao, Dongxing Mao, Yang Wan, Jingru Tan, Min Zeng, Min Li, Alex Jinpeng Wang

2025-10-23

Summary

This paper introduces Chart2Code, a new way to test how well artificial intelligence models can understand charts and then write code to create or change them based on instructions.

What's the problem?

Current AI models struggle with tasks that require understanding visual information, like charts, and then translating that understanding into functional code. Existing tests either weren't realistic enough or didn't scale in difficulty, making it hard to accurately measure progress in this area. Basically, it was hard to tell how good these models *really* were at chart-to-code tasks.

What's the solution?

The researchers created Chart2Code, a benchmark with 2,023 tasks across 22 chart types, divided into three levels of difficulty. The first level asks the AI to recreate a chart from a reference figure, the second asks it to edit an existing chart (like changing the chart type or adding elements), and the third asks it to build a chart from a long, information-dense table of data. They then tested 25 state-of-the-art AI models on these tasks, measuring both the correctness of the generated code and how faithfully the rendered chart matched the target.
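To make the two-sided evaluation concrete, here is a minimal sketch of what a code-based check might look like: compare properties extracted from the reference chart against properties extracted from the chart the model's code produces. Everything here (the function name, the property dictionaries, the Level 2 example) is illustrative, not the paper's actual metric implementation.

```python
# Hypothetical sketch of a code-based evaluation in the spirit of
# Chart2Code's multi-level metrics. The property names and scoring
# rule are assumptions for illustration, not the paper's method.

def score_chart_properties(reference: dict, generated: dict) -> float:
    """Return the fraction of reference chart properties that the
    generated chart matches exactly.

    Each dict maps a property name (chart type, axis labels, data
    series, ...) to a value extracted from the executed plotting code.
    """
    if not reference:
        return 1.0  # nothing to check
    matched = sum(
        1 for key, value in reference.items()
        if generated.get(key) == value
    )
    return matched / len(reference)


# Example: a Level 2 editing task where the user asked to switch a
# line chart to a bar chart. The model changed the type and kept the
# data, but dropped the y-axis label.
reference = {"type": "bar", "xlabel": "Year", "ylabel": "Sales", "series": (3, 5, 7)}
generated = {"type": "bar", "xlabel": "Year", "ylabel": None, "series": (3, 5, 7)}

print(score_chart_properties(reference, generated))  # 0.75
```

A score like this only captures code correctness; the paper's chart-quality assessment additionally judges the visual fidelity of the rendered figure, which a property-by-property comparison like the one above cannot fully measure.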

Why it matters?

This benchmark is important because it provides a more challenging and realistic test for AI models that need to work with charts and code. The results show that even the best models have a lot of room for improvement: the strongest model tested, GPT-5, averages only 0.57 on code-based evaluation and 0.22 on chart-quality assessment across the editing tasks. Chart2Code can thus help drive the development of more capable and reliable AI systems that can understand and interact with visual data.

Abstract

We introduce Chart2Code, a new benchmark for evaluating the chart understanding and code generation capabilities of large multimodal models (LMMs). Chart2Code is explicitly designed from a user-driven perspective, capturing diverse real-world scenarios and progressively increasing task difficulty. It consists of three levels: Level 1 (Chart Reproduction) reproduces charts from a reference figure and user query; Level 2 (Chart Editing) involves complex modifications such as changing chart types or adding elements; and Level 3 (Long-Table to Chart Generation) requires models to transform long, information-dense tables into faithful charts following user instructions. To our knowledge, this is the first hierarchical benchmark that reflects practical chart2code usage while systematically scaling task complexity. In total, Chart2Code contains 2,023 tasks across 22 chart types, paired with multi-level evaluation metrics that assess both code correctness and the visual fidelity of rendered charts. We benchmark 25 state-of-the-art (SoTA) LMMs, including both proprietary and the latest open-source models such as GPT-5, Qwen2.5-VL, InternVL3/3.5, MiMo-VL, and Seed-1.6-VL. Experimental results demonstrate that even the SoTA model GPT-5 averages only 0.57 on code-based evaluation and 0.22 on chart-quality assessment across the editing tasks, underscoring the difficulty of Chart2Code. We anticipate this benchmark will drive advances in multimodal reasoning and foster the development of more robust and general-purpose LMMs. Our code and data are available on Chart2Code.