Can Large Language Models Understand Symbolic Graphics Programs?

Zeju Qiu, Weiyang Liu, Haiwen Feng, Zhen Liu, Tim Z. Xiao, Katherine M. Collins, Joshua B. Tenenbaum, Adrian Weller, Michael J. Black, Bernhard Schölkopf

2024-08-16

Summary

This paper investigates whether large language models (LLMs) can understand symbolic graphics programs: code that procedurally generates visual content when rendered.
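
To make the setup concrete, here is a small, purely illustrative example (not drawn from the paper's benchmark) of a symbolic graphics program and the kind of question the paper studies; the SVG markup, question, and answer below are hypothetical.

```python
# Hypothetical example: an SVG program that, when rendered, draws a red
# circle above a blue square.
svg_program = """
<svg xmlns="http://www.w3.org/2000/svg" width="100" height="200">
  <circle cx="50" cy="50" r="30" fill="red"/>
  <rect x="20" y="120" width="60" height="60" fill="blue"/>
</svg>
"""

# A semantic question that is easy to answer from the rendered image but
# requires "imagining" the graphics when only the code is available.
question = "Which shape appears above the other, the circle or the square?"
answer = "The circle appears above the square."

# In the paper's setting, an LLM is given only the program text and the
# question -- never the rendered image -- and must produce the answer.
prompt = f"Here is a symbolic graphics program:\n{svg_program}\nQuestion: {question}"
print(prompt)
```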

What's the problem?

Assessing what LLMs actually understand is difficult because most candidate tasks already appear, in some form, in their training data. Symbolic graphics programs are a useful test case: they generate images from code, but it is unclear whether LLMs can comprehend these programs well enough to answer questions about the visuals they create without ever seeing the rendered images.

What's the solution?

The authors created a benchmark that evaluates LLMs on their ability to answer questions about symbolic graphics programs. The models are given only the program code and asked questions that would be easy to answer if they could see the graphics the programs produce. To enhance this ability, the authors introduce Symbolic Instruction Tuning (SIT): they query GPT-4o with images rendered from symbolic programs to collect question-answer data, and then fine-tune an LLM on that data.
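
The following is a rough sketch of how SIT-style training data could be assembled, assuming hypothetical helpers render(program) for rasterizing a symbolic program and query_vlm(image, prompt) for calling a vision-language model such as GPT-4o; it illustrates the idea rather than the authors' actual pipeline.

```python
import json

def build_sit_examples(programs, render, query_vlm, questions_per_program=3):
    """Sketch of Symbolic Instruction Tuning (SIT) data construction.

    `render` (program -> image) and `query_vlm` (image, prompt -> str) are
    assumed helpers supplied by the caller; they stand in for a renderer and
    a vision-language model such as GPT-4o.
    """
    examples = []
    for program in programs:
        image = render(program)  # rasterize the symbolic graphics program
        for _ in range(questions_per_program):
            # Ask the vision-language model to invent a question about the
            # rendered image and answer it, grounded in what it actually sees.
            reply = query_vlm(
                image,
                "Ask one question about this image and answer it. "
                "Reply as JSON with keys 'question' and 'answer'.",
            )
            qa = json.loads(reply)
            # The model being fine-tuned sees only the program text, never
            # the image, so it must learn to reason about the rendering.
            examples.append({
                "instruction": f"Program:\n{program}\n\nQuestion: {qa['question']}",
                "output": qa["answer"],
            })
    return examples
```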

Why it matters?

This research is significant because it helps determine the limits of LLMs in understanding complex visual tasks. Improving how these models interpret and generate graphics content could lead to better applications in fields like computer graphics, game design, and education.

Abstract

Assessing the capabilities of large language models (LLMs) is often challenging, in part, because it is hard to find tasks to which they have not been exposed during training. We take one step to address this challenge by turning to a new task: focusing on symbolic graphics programs, which are a popular representation for graphics content that procedurally generates visual data. LLMs have shown exciting promise towards program synthesis, but do they understand symbolic graphics programs? Unlike conventional programs, symbolic graphics programs can be translated to graphics content. Here, we characterize an LLM's understanding of symbolic programs in terms of their ability to answer questions related to the graphics content. This task is challenging as the questions are difficult to answer from the symbolic programs alone -- yet, they would be easy to answer from the corresponding graphics content as we verify through a human experiment. To understand symbolic programs, LLMs may need to possess the ability to imagine how the corresponding graphics content would look without directly accessing the rendered visual content. We use this task to evaluate LLMs by creating a large benchmark for the semantic understanding of symbolic graphics programs. This benchmark is built via program-graphics correspondence, hence requiring minimal human effort. We evaluate current LLMs on our benchmark to elucidate a preliminary assessment of their ability to reason about visual scenes from programs. We find that this task distinguishes existing LLMs, and models considered good at reasoning perform better. Lastly, we introduce Symbolic Instruction Tuning (SIT) to improve this ability. Specifically, we query GPT-4o with questions and images generated by symbolic programs. Such data are then used to finetune an LLM. We also find that SIT data can improve the general instruction following ability of LLMs.
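
As a complement to the abstract, here is a minimal sketch of the program-only evaluation protocol it describes, assuming a text-only LLM interface ask_llm(prompt) -> str and a benchmark of (program, question, answer) records; the exact-match scoring is a simplification for illustration.

```python
def evaluate_program_understanding(benchmark, ask_llm):
    """Score an LLM that sees only program text, never the rendered image.

    `benchmark` is a list of dicts with keys 'program', 'question', 'answer';
    `ask_llm(prompt) -> str` is an assumed text-only model interface.
    """
    correct = 0
    for item in benchmark:
        prompt = (
            f"Symbolic graphics program:\n{item['program']}\n\n"
            f"Question: {item['question']}\nAnswer concisely."
        )
        prediction = ask_llm(prompt)  # no rendered image is ever provided
        correct += int(prediction.strip().lower() == item["answer"].strip().lower())
    return correct / len(benchmark)  # accuracy over the benchmark
```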